00:00:00.001 Started by upstream project "autotest-per-patch" build number 126190 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.116 The recommended git tool is: git 00:00:00.116 using credential 00000000-0000-0000-0000-000000000002 00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.184 Using shallow fetch with depth 1 00:00:00.184 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.184 > git --version # timeout=10 00:00:00.207 > git --version # 'git version 2.39.2' 00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.224 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.224 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.644 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.655 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.665 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.665 > git config core.sparsecheckout # timeout=10 00:00:06.678 > git read-tree -mu HEAD # timeout=10 00:00:06.694 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.718 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.718 > git rev-list --no-walk d49304e16352441ae7eebb2419125dd094201f3e # timeout=10 00:00:06.828 [Pipeline] Start of Pipeline 00:00:06.845 [Pipeline] library 00:00:06.846 Loading library shm_lib@master 00:00:06.847 Library shm_lib@master is cached. Copying from home. 00:00:06.870 [Pipeline] node 00:00:06.890 Running on WFP5 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.892 [Pipeline] { 00:00:06.904 [Pipeline] catchError 00:00:06.906 [Pipeline] { 00:00:06.921 [Pipeline] wrap 00:00:06.933 [Pipeline] { 00:00:06.942 [Pipeline] stage 00:00:06.944 [Pipeline] { (Prologue) 00:00:07.135 [Pipeline] sh 00:00:07.445 + logger -p user.info -t JENKINS-CI 00:00:07.466 [Pipeline] echo 00:00:07.468 Node: WFP5 00:00:07.476 [Pipeline] sh 00:00:07.770 [Pipeline] setCustomBuildProperty 00:00:07.778 [Pipeline] echo 00:00:07.780 Cleanup processes 00:00:07.783 [Pipeline] sh 00:00:08.062 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.062 2565330 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.075 [Pipeline] sh 00:00:08.351 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.351 ++ grep -v 'sudo pgrep' 00:00:08.351 ++ awk '{print $1}' 00:00:08.351 + sudo kill -9 00:00:08.351 + true 00:00:08.362 [Pipeline] cleanWs 00:00:08.370 [WS-CLEANUP] Deleting project workspace... 00:00:08.370 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.376 [WS-CLEANUP] done 00:00:08.378 [Pipeline] setCustomBuildProperty 00:00:08.388 [Pipeline] sh 00:00:08.666 + sudo git config --global --replace-all safe.directory '*' 00:00:08.752 [Pipeline] httpRequest 00:00:08.793 [Pipeline] echo 00:00:08.794 Sorcerer 10.211.164.101 is alive 00:00:08.800 [Pipeline] httpRequest 00:00:08.803 HttpMethod: GET 00:00:08.804 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.805 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.817 Response Code: HTTP/1.1 200 OK 00:00:08.817 Success: Status code 200 is in the accepted range: 200,404 00:00:08.817 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:13.631 [Pipeline] sh 00:00:13.945 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:13.963 [Pipeline] httpRequest 00:00:13.993 [Pipeline] echo 00:00:13.995 Sorcerer 10.211.164.101 is alive 00:00:14.005 [Pipeline] httpRequest 00:00:14.010 HttpMethod: GET 00:00:14.010 URL: http://10.211.164.101/packages/spdk_bd4841ef7e9d2effb31fb7a812842ddd2ffe65db.tar.gz 00:00:14.011 Sending request to url: http://10.211.164.101/packages/spdk_bd4841ef7e9d2effb31fb7a812842ddd2ffe65db.tar.gz 00:00:14.037 Response Code: HTTP/1.1 200 OK 00:00:14.037 Success: Status code 200 is in the accepted range: 200,404 00:00:14.038 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_bd4841ef7e9d2effb31fb7a812842ddd2ffe65db.tar.gz 00:02:03.183 [Pipeline] sh 00:02:03.458 + tar --no-same-owner -xf spdk_bd4841ef7e9d2effb31fb7a812842ddd2ffe65db.tar.gz 00:02:06.000 [Pipeline] sh 00:02:06.279 + git -C spdk log --oneline -n5 00:02:06.279 bd4841ef7 autopackage: Replace SPDK_TEST_RELEASE_BUILD with SPDK_TEST_PACKAGING 00:02:06.279 719d03c6a sock/uring: only register net impl if supported 00:02:06.279 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:02:06.279 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:02:06.279 6c7c1f57e accel: add sequence outstanding stat 00:02:06.290 [Pipeline] } 00:02:06.304 [Pipeline] // stage 00:02:06.310 [Pipeline] stage 00:02:06.312 [Pipeline] { (Prepare) 00:02:06.330 [Pipeline] writeFile 00:02:06.348 [Pipeline] sh 00:02:06.630 + logger -p user.info -t JENKINS-CI 00:02:06.643 [Pipeline] sh 00:02:06.922 + logger -p user.info -t JENKINS-CI 00:02:06.934 [Pipeline] sh 00:02:07.222 + cat autorun-spdk.conf 00:02:07.222 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.222 SPDK_TEST_NVMF=1 00:02:07.222 SPDK_TEST_NVME_CLI=1 00:02:07.222 SPDK_TEST_NVMF_NICS=mlx5 00:02:07.222 SPDK_RUN_UBSAN=1 00:02:07.222 NET_TYPE=phy 00:02:07.230 RUN_NIGHTLY=0 00:02:07.235 [Pipeline] readFile 00:02:07.252 [Pipeline] withEnv 00:02:07.253 [Pipeline] { 00:02:07.265 [Pipeline] sh 00:02:07.545 + set -ex 00:02:07.545 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:02:07.545 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:07.545 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.545 ++ SPDK_TEST_NVMF=1 00:02:07.545 ++ SPDK_TEST_NVME_CLI=1 00:02:07.545 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:07.545 ++ SPDK_RUN_UBSAN=1 00:02:07.545 ++ NET_TYPE=phy 00:02:07.545 ++ RUN_NIGHTLY=0 00:02:07.545 + case $SPDK_TEST_NVMF_NICS in 00:02:07.545 + DRIVERS=mlx5_ib 00:02:07.545 + [[ -n mlx5_ib ]] 00:02:07.545 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:07.545 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:07.545 
rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:07.545 rmmod: ERROR: Module irdma is not currently loaded
00:02:07.545 rmmod: ERROR: Module i40iw is not currently loaded
00:02:07.545 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:07.545 + true
00:02:07.545 + for D in $DRIVERS
00:02:07.545 + sudo modprobe mlx5_ib
00:02:07.803 + exit 0
00:02:07.812 [Pipeline] }
00:02:07.826 [Pipeline] // withEnv
00:02:07.830 [Pipeline] }
00:02:07.844 [Pipeline] // stage
00:02:07.852 [Pipeline] catchError
00:02:07.854 [Pipeline] {
00:02:07.869 [Pipeline] timeout
00:02:07.869 Timeout set to expire in 1 hr 0 min
00:02:07.871 [Pipeline] {
00:02:07.885 [Pipeline] stage
00:02:07.887 [Pipeline] { (Tests)
00:02:07.902 [Pipeline] sh
00:02:08.181 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:02:08.181 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:02:08.181 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:02:08.181 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:02:08.181 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:08.181 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:02:08.181 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:02:08.181 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:08.181 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:02:08.181 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:02:08.181 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:02:08.181 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:02:08.181 + source /etc/os-release
00:02:08.181 ++ NAME='Fedora Linux'
00:02:08.181 ++ VERSION='38 (Cloud Edition)'
00:02:08.181 ++ ID=fedora
00:02:08.181 ++ VERSION_ID=38
00:02:08.181 ++ VERSION_CODENAME=
00:02:08.181 ++ PLATFORM_ID=platform:f38
00:02:08.181 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:08.181 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:08.181 ++ LOGO=fedora-logo-icon
00:02:08.181 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:08.181 ++ HOME_URL=https://fedoraproject.org/
00:02:08.181 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:08.182 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:08.182 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:08.182 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:08.182 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:08.182 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:08.182 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:08.182 ++ SUPPORT_END=2024-05-14
00:02:08.182 ++ VARIANT='Cloud Edition'
00:02:08.182 ++ VARIANT_ID=cloud
00:02:08.182 + uname -a
00:02:08.182 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:08.182 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:02:10.710 Hugepages
00:02:10.710 node hugesize free / total
00:02:10.710 node0 1048576kB 0 / 0
00:02:10.710 node0 2048kB 0 / 0
00:02:10.710 node1 1048576kB 0 / 0
00:02:10.710 node1 2048kB 0 / 0
00:02:10.710
00:02:10.710 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:10.710 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:10.710 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:10.710 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:10.710 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:10.710 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:10.710 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:10.710 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:10.710 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:10.710 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:10.710 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:10.710 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:10.710 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:10.710 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:10.710 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:10.710 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:10.710 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:10.710 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:10.710 + rm -f /tmp/spdk-ld-path
00:02:10.710 + source autorun-spdk.conf
00:02:10.710 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:10.710 ++ SPDK_TEST_NVMF=1
00:02:10.710 ++ SPDK_TEST_NVME_CLI=1
00:02:10.710 ++ SPDK_TEST_NVMF_NICS=mlx5
00:02:10.710 ++ SPDK_RUN_UBSAN=1
00:02:10.710 ++ NET_TYPE=phy
00:02:10.710 ++ RUN_NIGHTLY=0
00:02:10.710 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:10.710 + [[ -n '' ]]
00:02:10.710 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:10.710 + for M in /var/spdk/build-*-manifest.txt
00:02:10.710 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:10.710 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:02:10.710 + for M in /var/spdk/build-*-manifest.txt
00:02:10.710 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:10.710 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:02:10.710 ++ uname
00:02:10.710 + [[ Linux == \L\i\n\u\x ]]
00:02:10.710 + sudo dmesg -T
00:02:10.710 + sudo dmesg --clear
00:02:10.710 + dmesg_pid=2566776
00:02:10.710 + [[ Fedora Linux == FreeBSD ]]
00:02:10.710 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.710 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:10.710 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:10.710 + sudo dmesg -Tw
00:02:10.710 + [[ -x /usr/src/fio-static/fio ]]
00:02:10.710 + export FIO_BIN=/usr/src/fio-static/fio
00:02:10.710 + FIO_BIN=/usr/src/fio-static/fio
00:02:10.710 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:10.710 + [[ !
-v VFIO_QEMU_BIN ]] 00:02:10.710 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.710 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.710 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.710 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.710 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.710 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.710 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:10.710 Test configuration: 00:02:10.710 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.710 SPDK_TEST_NVMF=1 00:02:10.710 SPDK_TEST_NVME_CLI=1 00:02:10.710 SPDK_TEST_NVMF_NICS=mlx5 00:02:10.710 SPDK_RUN_UBSAN=1 00:02:10.710 NET_TYPE=phy 00:02:10.967 RUN_NIGHTLY=0 14:36:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:10.967 14:36:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.967 14:36:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.967 14:36:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.967 14:36:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.967 14:36:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.968 14:36:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.968 14:36:44 -- paths/export.sh@5 -- $ export PATH 00:02:10.968 14:36:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.968 14:36:44 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:10.968 14:36:44 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:10.968 14:36:44 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721047004.XXXXXX 00:02:10.968 14:36:44 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721047004.u2yIIB 00:02:10.968 14:36:44 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:10.968 14:36:44 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:10.968 14:36:44 -- common/autobuild_common.sh@453 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:02:10.968 14:36:44 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:10.968 14:36:44 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.968 14:36:44 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:10.968 14:36:44 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:10.968 14:36:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.968 14:36:44 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:02:10.968 14:36:44 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:10.968 14:36:44 -- pm/common@17 -- $ local monitor 00:02:10.968 14:36:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.968 14:36:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.968 14:36:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.968 14:36:44 -- pm/common@21 -- $ date +%s 00:02:10.968 14:36:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.968 14:36:44 -- pm/common@21 -- $ date +%s 00:02:10.968 14:36:44 -- pm/common@25 -- $ sleep 1 00:02:10.968 14:36:44 -- pm/common@21 -- $ date +%s 00:02:10.968 14:36:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047004 00:02:10.968 14:36:44 -- pm/common@21 -- $ date +%s 00:02:10.968 14:36:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047004 00:02:10.968 14:36:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047004 00:02:10.968 14:36:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721047004 00:02:10.968 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047004_collect-cpu-load.pm.log 00:02:10.968 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047004_collect-vmstat.pm.log 00:02:10.968 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047004_collect-cpu-temp.pm.log 00:02:10.968 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721047004_collect-bmc-pm.bmc.pm.log 00:02:11.901 14:36:45 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:11.901 14:36:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.901 14:36:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.901 14:36:45 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:11.901 14:36:45 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.901 Mon Jul 15 12:36:45 PM UTC 2024 00:02:11.901 14:36:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.901 v24.09-pre-203-gbd4841ef7 00:02:11.901 14:36:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.901 14:36:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.901 14:36:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.901 14:36:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:11.901 14:36:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:11.901 14:36:45 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.901 ************************************ 00:02:11.901 START TEST ubsan 00:02:11.901 ************************************ 00:02:11.901 14:36:45 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:11.901 using ubsan 00:02:11.901 00:02:11.901 real 0m0.000s 00:02:11.901 user 0m0.000s 00:02:11.901 sys 0m0.000s 00:02:11.901 14:36:45 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:11.901 14:36:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.901 ************************************ 00:02:11.901 END TEST ubsan 00:02:11.901 ************************************ 00:02:11.901 14:36:45 -- common/autotest_common.sh@1142 -- $ return 0 00:02:11.901 14:36:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.901 14:36:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.901 14:36:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.901 14:36:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.901 14:36:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.901 14:36:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.901 14:36:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.901 14:36:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.901 14:36:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:02:12.159 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:12.159 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:12.417 Using 'verbs' RDMA provider 00:02:25.556 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:35.533 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:35.791 Creating mk/config.mk...done. 00:02:35.791 Creating mk/cc.flags.mk...done. 00:02:35.791 Type 'make' to build. 00:02:35.791 14:37:09 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:35.791 14:37:09 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:35.791 14:37:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:35.791 14:37:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.791 ************************************ 00:02:35.791 START TEST make 00:02:35.791 ************************************ 00:02:35.791 14:37:09 make -- common/autotest_common.sh@1123 -- $ make -j96 00:02:36.355 make[1]: Nothing to be done for 'all'. 
00:02:44.465 The Meson build system 00:02:44.465 Version: 1.3.1 00:02:44.465 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:44.465 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:44.465 Build type: native build 00:02:44.465 Program cat found: YES (/usr/bin/cat) 00:02:44.465 Project name: DPDK 00:02:44.465 Project version: 24.03.0 00:02:44.465 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:44.465 C linker for the host machine: cc ld.bfd 2.39-16 00:02:44.465 Host machine cpu family: x86_64 00:02:44.465 Host machine cpu: x86_64 00:02:44.465 Message: ## Building in Developer Mode ## 00:02:44.465 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:44.465 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:44.465 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:44.465 Program python3 found: YES (/usr/bin/python3) 00:02:44.465 Program cat found: YES (/usr/bin/cat) 00:02:44.465 Compiler for C supports arguments -march=native: YES 00:02:44.465 Checking for size of "void *" : 8 00:02:44.465 Checking for size of "void *" : 8 (cached) 00:02:44.465 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:44.465 Library m found: YES 00:02:44.465 Library numa found: YES 00:02:44.465 Has header "numaif.h" : YES 00:02:44.465 Library fdt found: NO 00:02:44.465 Library execinfo found: NO 00:02:44.465 Has header "execinfo.h" : YES 00:02:44.465 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:44.465 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:44.465 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:44.465 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:44.465 Run-time dependency openssl found: YES 3.0.9 00:02:44.465 Run-time dependency libpcap found: YES 1.10.4 00:02:44.465 Has header "pcap.h" with dependency libpcap: YES 00:02:44.465 Compiler for C supports arguments -Wcast-qual: YES 00:02:44.465 Compiler for C supports arguments -Wdeprecated: YES 00:02:44.465 Compiler for C supports arguments -Wformat: YES 00:02:44.465 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:44.465 Compiler for C supports arguments -Wformat-security: NO 00:02:44.465 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:44.465 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:44.465 Compiler for C supports arguments -Wnested-externs: YES 00:02:44.465 Compiler for C supports arguments -Wold-style-definition: YES 00:02:44.465 Compiler for C supports arguments -Wpointer-arith: YES 00:02:44.465 Compiler for C supports arguments -Wsign-compare: YES 00:02:44.465 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:44.465 Compiler for C supports arguments -Wundef: YES 00:02:44.465 Compiler for C supports arguments -Wwrite-strings: YES 00:02:44.465 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:44.465 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:44.465 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:44.465 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:44.465 Program objdump found: YES (/usr/bin/objdump) 00:02:44.465 Compiler for C supports arguments -mavx512f: YES 00:02:44.465 Checking if "AVX512 checking" compiles: YES 00:02:44.465 Fetching 
value of define "__SSE4_2__" : 1 00:02:44.465 Fetching value of define "__AES__" : 1 00:02:44.465 Fetching value of define "__AVX__" : 1 00:02:44.465 Fetching value of define "__AVX2__" : 1 00:02:44.465 Fetching value of define "__AVX512BW__" : 1 00:02:44.465 Fetching value of define "__AVX512CD__" : 1 00:02:44.465 Fetching value of define "__AVX512DQ__" : 1 00:02:44.465 Fetching value of define "__AVX512F__" : 1 00:02:44.466 Fetching value of define "__AVX512VL__" : 1 00:02:44.466 Fetching value of define "__PCLMUL__" : 1 00:02:44.466 Fetching value of define "__RDRND__" : 1 00:02:44.466 Fetching value of define "__RDSEED__" : 1 00:02:44.466 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:44.466 Fetching value of define "__znver1__" : (undefined) 00:02:44.466 Fetching value of define "__znver2__" : (undefined) 00:02:44.466 Fetching value of define "__znver3__" : (undefined) 00:02:44.466 Fetching value of define "__znver4__" : (undefined) 00:02:44.466 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:44.466 Message: lib/log: Defining dependency "log" 00:02:44.466 Message: lib/kvargs: Defining dependency "kvargs" 00:02:44.466 Message: lib/telemetry: Defining dependency "telemetry" 00:02:44.466 Checking for function "getentropy" : NO 00:02:44.466 Message: lib/eal: Defining dependency "eal" 00:02:44.466 Message: lib/ring: Defining dependency "ring" 00:02:44.466 Message: lib/rcu: Defining dependency "rcu" 00:02:44.466 Message: lib/mempool: Defining dependency "mempool" 00:02:44.466 Message: lib/mbuf: Defining dependency "mbuf" 00:02:44.466 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:44.466 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:44.466 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:44.466 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:44.466 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:44.466 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:44.466 Compiler for C supports arguments -mpclmul: YES 00:02:44.466 Compiler for C supports arguments -maes: YES 00:02:44.466 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.466 Compiler for C supports arguments -mavx512bw: YES 00:02:44.466 Compiler for C supports arguments -mavx512dq: YES 00:02:44.466 Compiler for C supports arguments -mavx512vl: YES 00:02:44.466 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:44.466 Compiler for C supports arguments -mavx2: YES 00:02:44.466 Compiler for C supports arguments -mavx: YES 00:02:44.466 Message: lib/net: Defining dependency "net" 00:02:44.466 Message: lib/meter: Defining dependency "meter" 00:02:44.466 Message: lib/ethdev: Defining dependency "ethdev" 00:02:44.466 Message: lib/pci: Defining dependency "pci" 00:02:44.466 Message: lib/cmdline: Defining dependency "cmdline" 00:02:44.466 Message: lib/hash: Defining dependency "hash" 00:02:44.466 Message: lib/timer: Defining dependency "timer" 00:02:44.466 Message: lib/compressdev: Defining dependency "compressdev" 00:02:44.466 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:44.466 Message: lib/dmadev: Defining dependency "dmadev" 00:02:44.466 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:44.466 Message: lib/power: Defining dependency "power" 00:02:44.466 Message: lib/reorder: Defining dependency "reorder" 00:02:44.466 Message: lib/security: Defining dependency "security" 00:02:44.466 Has header "linux/userfaultfd.h" : YES 00:02:44.466 Has header "linux/vduse.h" : YES 00:02:44.466 Message: 
lib/vhost: Defining dependency "vhost"
00:02:44.466 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:44.466 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:44.466 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:44.466 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:44.466 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:44.466 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:44.466 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:44.466 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:44.466 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:44.466 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:44.466 Program doxygen found: YES (/usr/bin/doxygen)
00:02:44.466 Configuring doxy-api-html.conf using configuration
00:02:44.466 Configuring doxy-api-man.conf using configuration
00:02:44.466 Program mandb found: YES (/usr/bin/mandb)
00:02:44.466 Program sphinx-build found: NO
00:02:44.466 Configuring rte_build_config.h using configuration
00:02:44.466 Message:
00:02:44.466 =================
00:02:44.466 Applications Enabled
00:02:44.466 =================
00:02:44.466
00:02:44.466 apps:
00:02:44.466
00:02:44.466
00:02:44.466 Message:
00:02:44.466 =================
00:02:44.466 Libraries Enabled
00:02:44.466 =================
00:02:44.466
00:02:44.466 libs:
00:02:44.466 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:44.466 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:44.466 cryptodev, dmadev, power, reorder, security, vhost,
00:02:44.466
00:02:44.466 Message:
00:02:44.466 ===============
00:02:44.466 Drivers Enabled
00:02:44.466 ===============
00:02:44.466
00:02:44.466 common:
00:02:44.466
00:02:44.466 bus:
00:02:44.466 pci, vdev,
00:02:44.466 mempool:
00:02:44.466 ring,
00:02:44.466 dma:
00:02:44.466
00:02:44.466 net:
00:02:44.466
00:02:44.466 crypto:
00:02:44.466
00:02:44.466 compress:
00:02:44.466
00:02:44.466 vdpa:
00:02:44.466
00:02:44.466
00:02:44.466 Message:
00:02:44.466 =================
00:02:44.466 Content Skipped
00:02:44.466 =================
00:02:44.466
00:02:44.466 apps:
00:02:44.466 dumpcap: explicitly disabled via build config
00:02:44.466 graph: explicitly disabled via build config
00:02:44.466 pdump: explicitly disabled via build config
00:02:44.466 proc-info: explicitly disabled via build config
00:02:44.466 test-acl: explicitly disabled via build config
00:02:44.466 test-bbdev: explicitly disabled via build config
00:02:44.466 test-cmdline: explicitly disabled via build config
00:02:44.466 test-compress-perf: explicitly disabled via build config
00:02:44.466 test-crypto-perf: explicitly disabled via build config
00:02:44.466 test-dma-perf: explicitly disabled via build config
00:02:44.466 test-eventdev: explicitly disabled via build config
00:02:44.466 test-fib: explicitly disabled via build config
00:02:44.466 test-flow-perf: explicitly disabled via build config
00:02:44.466 test-gpudev: explicitly disabled via build config
00:02:44.466 test-mldev: explicitly disabled via build config
00:02:44.466 test-pipeline: explicitly disabled via build config
00:02:44.466 test-pmd: explicitly disabled via build config
00:02:44.466 test-regex: explicitly disabled via build config
00:02:44.466 test-sad: explicitly disabled via build config
00:02:44.466 test-security-perf: explicitly disabled via
build config 00:02:44.466 00:02:44.466 libs: 00:02:44.466 argparse: explicitly disabled via build config 00:02:44.466 metrics: explicitly disabled via build config 00:02:44.466 acl: explicitly disabled via build config 00:02:44.466 bbdev: explicitly disabled via build config 00:02:44.466 bitratestats: explicitly disabled via build config 00:02:44.466 bpf: explicitly disabled via build config 00:02:44.466 cfgfile: explicitly disabled via build config 00:02:44.466 distributor: explicitly disabled via build config 00:02:44.466 efd: explicitly disabled via build config 00:02:44.466 eventdev: explicitly disabled via build config 00:02:44.466 dispatcher: explicitly disabled via build config 00:02:44.466 gpudev: explicitly disabled via build config 00:02:44.466 gro: explicitly disabled via build config 00:02:44.466 gso: explicitly disabled via build config 00:02:44.466 ip_frag: explicitly disabled via build config 00:02:44.466 jobstats: explicitly disabled via build config 00:02:44.466 latencystats: explicitly disabled via build config 00:02:44.466 lpm: explicitly disabled via build config 00:02:44.466 member: explicitly disabled via build config 00:02:44.466 pcapng: explicitly disabled via build config 00:02:44.466 rawdev: explicitly disabled via build config 00:02:44.466 regexdev: explicitly disabled via build config 00:02:44.466 mldev: explicitly disabled via build config 00:02:44.466 rib: explicitly disabled via build config 00:02:44.466 sched: explicitly disabled via build config 00:02:44.466 stack: explicitly disabled via build config 00:02:44.466 ipsec: explicitly disabled via build config 00:02:44.466 pdcp: explicitly disabled via build config 00:02:44.466 fib: explicitly disabled via build config 00:02:44.466 port: explicitly disabled via build config 00:02:44.466 pdump: explicitly disabled via build config 00:02:44.466 table: explicitly disabled via build config 00:02:44.466 pipeline: explicitly disabled via build config 00:02:44.466 graph: explicitly disabled via build config 00:02:44.466 node: explicitly disabled via build config 00:02:44.466 00:02:44.466 drivers: 00:02:44.466 common/cpt: not in enabled drivers build config 00:02:44.466 common/dpaax: not in enabled drivers build config 00:02:44.466 common/iavf: not in enabled drivers build config 00:02:44.466 common/idpf: not in enabled drivers build config 00:02:44.466 common/ionic: not in enabled drivers build config 00:02:44.466 common/mvep: not in enabled drivers build config 00:02:44.466 common/octeontx: not in enabled drivers build config 00:02:44.466 bus/auxiliary: not in enabled drivers build config 00:02:44.466 bus/cdx: not in enabled drivers build config 00:02:44.466 bus/dpaa: not in enabled drivers build config 00:02:44.466 bus/fslmc: not in enabled drivers build config 00:02:44.466 bus/ifpga: not in enabled drivers build config 00:02:44.466 bus/platform: not in enabled drivers build config 00:02:44.466 bus/uacce: not in enabled drivers build config 00:02:44.466 bus/vmbus: not in enabled drivers build config 00:02:44.466 common/cnxk: not in enabled drivers build config 00:02:44.466 common/mlx5: not in enabled drivers build config 00:02:44.466 common/nfp: not in enabled drivers build config 00:02:44.466 common/nitrox: not in enabled drivers build config 00:02:44.466 common/qat: not in enabled drivers build config 00:02:44.466 common/sfc_efx: not in enabled drivers build config 00:02:44.466 mempool/bucket: not in enabled drivers build config 00:02:44.466 mempool/cnxk: not in enabled drivers build config 00:02:44.466 
mempool/dpaa: not in enabled drivers build config 00:02:44.466 mempool/dpaa2: not in enabled drivers build config 00:02:44.466 mempool/octeontx: not in enabled drivers build config 00:02:44.466 mempool/stack: not in enabled drivers build config 00:02:44.466 dma/cnxk: not in enabled drivers build config 00:02:44.466 dma/dpaa: not in enabled drivers build config 00:02:44.466 dma/dpaa2: not in enabled drivers build config 00:02:44.466 dma/hisilicon: not in enabled drivers build config 00:02:44.466 dma/idxd: not in enabled drivers build config 00:02:44.466 dma/ioat: not in enabled drivers build config 00:02:44.466 dma/skeleton: not in enabled drivers build config 00:02:44.466 net/af_packet: not in enabled drivers build config 00:02:44.467 net/af_xdp: not in enabled drivers build config 00:02:44.467 net/ark: not in enabled drivers build config 00:02:44.467 net/atlantic: not in enabled drivers build config 00:02:44.467 net/avp: not in enabled drivers build config 00:02:44.467 net/axgbe: not in enabled drivers build config 00:02:44.467 net/bnx2x: not in enabled drivers build config 00:02:44.467 net/bnxt: not in enabled drivers build config 00:02:44.467 net/bonding: not in enabled drivers build config 00:02:44.467 net/cnxk: not in enabled drivers build config 00:02:44.467 net/cpfl: not in enabled drivers build config 00:02:44.467 net/cxgbe: not in enabled drivers build config 00:02:44.467 net/dpaa: not in enabled drivers build config 00:02:44.467 net/dpaa2: not in enabled drivers build config 00:02:44.467 net/e1000: not in enabled drivers build config 00:02:44.467 net/ena: not in enabled drivers build config 00:02:44.467 net/enetc: not in enabled drivers build config 00:02:44.467 net/enetfec: not in enabled drivers build config 00:02:44.467 net/enic: not in enabled drivers build config 00:02:44.467 net/failsafe: not in enabled drivers build config 00:02:44.467 net/fm10k: not in enabled drivers build config 00:02:44.467 net/gve: not in enabled drivers build config 00:02:44.467 net/hinic: not in enabled drivers build config 00:02:44.467 net/hns3: not in enabled drivers build config 00:02:44.467 net/i40e: not in enabled drivers build config 00:02:44.467 net/iavf: not in enabled drivers build config 00:02:44.467 net/ice: not in enabled drivers build config 00:02:44.467 net/idpf: not in enabled drivers build config 00:02:44.467 net/igc: not in enabled drivers build config 00:02:44.467 net/ionic: not in enabled drivers build config 00:02:44.467 net/ipn3ke: not in enabled drivers build config 00:02:44.467 net/ixgbe: not in enabled drivers build config 00:02:44.467 net/mana: not in enabled drivers build config 00:02:44.467 net/memif: not in enabled drivers build config 00:02:44.467 net/mlx4: not in enabled drivers build config 00:02:44.467 net/mlx5: not in enabled drivers build config 00:02:44.467 net/mvneta: not in enabled drivers build config 00:02:44.467 net/mvpp2: not in enabled drivers build config 00:02:44.467 net/netvsc: not in enabled drivers build config 00:02:44.467 net/nfb: not in enabled drivers build config 00:02:44.467 net/nfp: not in enabled drivers build config 00:02:44.467 net/ngbe: not in enabled drivers build config 00:02:44.467 net/null: not in enabled drivers build config 00:02:44.467 net/octeontx: not in enabled drivers build config 00:02:44.467 net/octeon_ep: not in enabled drivers build config 00:02:44.467 net/pcap: not in enabled drivers build config 00:02:44.467 net/pfe: not in enabled drivers build config 00:02:44.467 net/qede: not in enabled drivers build config 00:02:44.467 
net/ring: not in enabled drivers build config 00:02:44.467 net/sfc: not in enabled drivers build config 00:02:44.467 net/softnic: not in enabled drivers build config 00:02:44.467 net/tap: not in enabled drivers build config 00:02:44.467 net/thunderx: not in enabled drivers build config 00:02:44.467 net/txgbe: not in enabled drivers build config 00:02:44.467 net/vdev_netvsc: not in enabled drivers build config 00:02:44.467 net/vhost: not in enabled drivers build config 00:02:44.467 net/virtio: not in enabled drivers build config 00:02:44.467 net/vmxnet3: not in enabled drivers build config 00:02:44.467 raw/*: missing internal dependency, "rawdev" 00:02:44.467 crypto/armv8: not in enabled drivers build config 00:02:44.467 crypto/bcmfs: not in enabled drivers build config 00:02:44.467 crypto/caam_jr: not in enabled drivers build config 00:02:44.467 crypto/ccp: not in enabled drivers build config 00:02:44.467 crypto/cnxk: not in enabled drivers build config 00:02:44.467 crypto/dpaa_sec: not in enabled drivers build config 00:02:44.467 crypto/dpaa2_sec: not in enabled drivers build config 00:02:44.467 crypto/ipsec_mb: not in enabled drivers build config 00:02:44.467 crypto/mlx5: not in enabled drivers build config 00:02:44.467 crypto/mvsam: not in enabled drivers build config 00:02:44.467 crypto/nitrox: not in enabled drivers build config 00:02:44.467 crypto/null: not in enabled drivers build config 00:02:44.467 crypto/octeontx: not in enabled drivers build config 00:02:44.467 crypto/openssl: not in enabled drivers build config 00:02:44.467 crypto/scheduler: not in enabled drivers build config 00:02:44.467 crypto/uadk: not in enabled drivers build config 00:02:44.467 crypto/virtio: not in enabled drivers build config 00:02:44.467 compress/isal: not in enabled drivers build config 00:02:44.467 compress/mlx5: not in enabled drivers build config 00:02:44.467 compress/nitrox: not in enabled drivers build config 00:02:44.467 compress/octeontx: not in enabled drivers build config 00:02:44.467 compress/zlib: not in enabled drivers build config 00:02:44.467 regex/*: missing internal dependency, "regexdev" 00:02:44.467 ml/*: missing internal dependency, "mldev" 00:02:44.467 vdpa/ifc: not in enabled drivers build config 00:02:44.467 vdpa/mlx5: not in enabled drivers build config 00:02:44.467 vdpa/nfp: not in enabled drivers build config 00:02:44.467 vdpa/sfc: not in enabled drivers build config 00:02:44.467 event/*: missing internal dependency, "eventdev" 00:02:44.467 baseband/*: missing internal dependency, "bbdev" 00:02:44.467 gpu/*: missing internal dependency, "gpudev" 00:02:44.467 00:02:44.467 00:02:44.467 Build targets in project: 85 00:02:44.467 00:02:44.467 DPDK 24.03.0 00:02:44.467 00:02:44.467 User defined options 00:02:44.467 buildtype : debug 00:02:44.467 default_library : shared 00:02:44.467 libdir : lib 00:02:44.467 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:44.467 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:44.467 c_link_args : 00:02:44.467 cpu_instruction_set: native 00:02:44.467 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:44.467 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:44.467 enable_docs : false 00:02:44.467 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:44.467 enable_kmods : false 00:02:44.467 max_lcores : 128 00:02:44.467 tests : false 00:02:44.467 00:02:44.467 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.467 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:44.735 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.735 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.735 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.735 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.735 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.735 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.735 [7/268] Linking static target lib/librte_kvargs.a 00:02:44.735 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.735 [9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.735 [10/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.735 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.735 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.735 [13/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.735 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.735 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.735 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.735 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:44.735 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:44.735 [19/268] Linking static target lib/librte_log.a 00:02:44.996 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.996 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.996 [22/268] Linking static target lib/librte_pci.a 00:02:44.996 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.996 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.996 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.996 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:45.254 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:45.254 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:45.254 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.254 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:45.254 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:45.254 [32/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:45.254 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.254 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.254 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:45.254 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.254 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:45.254 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.254 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.254 [40/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.254 [41/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.254 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:45.254 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.254 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.254 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.254 [46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:45.254 [47/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.254 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:45.254 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:45.254 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:45.254 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:45.254 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:45.254 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:45.254 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:45.254 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.254 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:45.254 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:45.254 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.254 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:45.254 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.254 [61/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.254 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:45.254 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:45.254 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:45.254 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.254 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.254 [67/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.254 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.254 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.254 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.254 [71/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:45.255 [72/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.255 [73/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:02:45.255 [74/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.255 [75/268] Linking static target lib/librte_meter.a 00:02:45.255 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:45.255 [77/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:45.255 [78/268] Linking static target lib/librte_ring.a 00:02:45.255 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.255 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:45.255 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.255 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.255 [83/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:45.255 [84/268] Linking static target lib/librte_telemetry.a 00:02:45.255 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:45.255 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:45.255 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:45.255 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:45.255 [89/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:45.255 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:45.255 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:45.255 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:45.255 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.255 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:45.255 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.255 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:45.255 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.255 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:45.512 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:45.512 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:45.512 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:45.512 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:45.512 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:45.512 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:45.512 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:45.512 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:45.512 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:45.512 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:45.512 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:45.512 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:45.512 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:45.512 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:45.512 [113/268] Linking static target lib/librte_rcu.a 00:02:45.512 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:45.512 [115/268] Compiling 
C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:45.512 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:45.512 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:45.512 [118/268] Linking static target lib/librte_net.a 00:02:45.512 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:45.512 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:45.512 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:45.512 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:45.512 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:45.512 [124/268] Linking static target lib/librte_eal.a 00:02:45.512 [125/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:45.512 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:45.512 [127/268] Linking static target lib/librte_cmdline.a 00:02:45.512 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:45.512 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:45.512 [130/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:45.512 [131/268] Linking static target lib/librte_mempool.a 00:02:45.512 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:45.512 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:45.512 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.512 [135/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.512 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:45.512 [137/268] Linking target lib/librte_log.so.24.1 00:02:45.512 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.512 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:45.512 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:45.512 [141/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:45.768 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:45.768 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:45.768 [144/268] Linking static target lib/librte_mbuf.a 00:02:45.768 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:45.768 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:45.768 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.768 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.768 [149/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:45.768 [150/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.768 [151/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.768 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:45.768 [153/268] Linking static target lib/librte_timer.a 00:02:45.768 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.768 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.768 [156/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:45.768 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:45.768 [158/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:45.768 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.768 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:45.768 [161/268] Linking static target lib/librte_dmadev.a 00:02:45.768 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:45.768 [163/268] Linking target lib/librte_kvargs.so.24.1 00:02:45.768 [164/268] Linking target lib/librte_telemetry.so.24.1 00:02:45.768 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:45.768 [166/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.768 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:45.768 [168/268] Linking static target lib/librte_compressdev.a 00:02:45.768 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.768 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:45.768 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.768 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.768 [173/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:45.768 [174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:45.769 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.769 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:45.769 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:45.769 [178/268] Linking static target lib/librte_reorder.a 00:02:45.769 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.026 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:46.026 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:46.026 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.026 [183/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:46.026 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:46.026 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.026 [186/268] Linking static target lib/librte_power.a 00:02:46.026 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:46.026 [188/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:46.026 [189/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:46.026 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:46.026 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.026 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:46.026 [193/268] Linking static target lib/librte_security.a 00:02:46.026 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:46.026 [195/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:46.026 
[196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:46.026 [197/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.026 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:46.026 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:46.026 [200/268] Linking static target drivers/librte_bus_vdev.a 00:02:46.026 [201/268] Linking static target lib/librte_hash.a 00:02:46.026 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:46.284 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.284 [204/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.284 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:46.284 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:46.284 [207/268] Linking static target drivers/librte_bus_pci.a 00:02:46.284 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:46.284 [209/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:46.284 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.284 [211/268] Linking static target lib/librte_cryptodev.a 00:02:46.284 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:46.284 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.284 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:46.284 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.284 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.284 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.541 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.541 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.541 [220/268] Linking static target lib/librte_ethdev.a 00:02:46.541 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.541 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.541 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.798 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.798 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.798 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.055 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.619 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.876 [229/268] Linking static target lib/librte_vhost.a 00:02:48.134 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.508 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.770 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.029 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.029 [234/268] Linking target lib/librte_eal.so.24.1 00:02:55.286 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:55.286 [236/268] Linking target lib/librte_meter.so.24.1 00:02:55.286 [237/268] Linking target lib/librte_pci.so.24.1 00:02:55.286 [238/268] Linking target lib/librte_ring.so.24.1 00:02:55.286 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:55.286 [240/268] Linking target lib/librte_timer.so.24.1 00:02:55.286 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:55.286 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:55.286 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:55.286 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:55.543 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:55.543 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:55.543 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:55.543 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:55.543 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:55.543 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:55.543 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.543 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:55.543 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:55.802 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:55.802 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:55.802 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:55.802 [257/268] Linking target lib/librte_net.so.24.1 00:02:55.802 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:55.802 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:55.802 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:56.060 [261/268] Linking target lib/librte_hash.so.24.1 00:02:56.060 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:56.060 [263/268] Linking target lib/librte_security.so.24.1 00:02:56.060 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:56.060 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:56.060 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:56.060 [267/268] Linking target lib/librte_power.so.24.1 00:02:56.060 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:56.060 INFO: autodetecting backend as ninja 00:02:56.060 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:56.992 CC lib/ut_mock/mock.o 00:02:56.992 CC lib/log/log.o 00:02:56.992 CC lib/log/log_flags.o 00:02:56.992 CC lib/log/log_deprecated.o 00:02:57.254 CC lib/ut/ut.o 00:02:57.254 LIB libspdk_ut_mock.a 00:02:57.254 LIB libspdk_log.a 00:02:57.254 LIB libspdk_ut.a 00:02:57.254 SO libspdk_ut_mock.so.6.0 00:02:57.254 SO libspdk_log.so.7.0 00:02:57.254 SO libspdk_ut.so.2.0 00:02:57.254 SYMLINK libspdk_ut_mock.so 00:02:57.254 
SYMLINK libspdk_log.so 00:02:57.254 SYMLINK libspdk_ut.so 00:02:57.656 CC lib/ioat/ioat.o 00:02:57.656 CC lib/dma/dma.o 00:02:57.656 CXX lib/trace_parser/trace.o 00:02:57.656 CC lib/util/base64.o 00:02:57.656 CC lib/util/bit_array.o 00:02:57.656 CC lib/util/cpuset.o 00:02:57.656 CC lib/util/crc16.o 00:02:57.656 CC lib/util/crc32.o 00:02:57.656 CC lib/util/crc32_ieee.o 00:02:57.656 CC lib/util/crc32c.o 00:02:57.656 CC lib/util/crc64.o 00:02:57.656 CC lib/util/fd.o 00:02:57.656 CC lib/util/dif.o 00:02:57.656 CC lib/util/iov.o 00:02:57.656 CC lib/util/file.o 00:02:57.656 CC lib/util/hexlify.o 00:02:57.656 CC lib/util/math.o 00:02:57.656 CC lib/util/string.o 00:02:57.656 CC lib/util/pipe.o 00:02:57.656 CC lib/util/strerror_tls.o 00:02:57.656 CC lib/util/uuid.o 00:02:57.656 CC lib/util/xor.o 00:02:57.656 CC lib/util/fd_group.o 00:02:57.656 CC lib/util/zipf.o 00:02:57.939 CC lib/vfio_user/host/vfio_user_pci.o 00:02:57.939 CC lib/vfio_user/host/vfio_user.o 00:02:57.939 LIB libspdk_dma.a 00:02:57.939 SO libspdk_dma.so.4.0 00:02:57.939 LIB libspdk_ioat.a 00:02:57.939 SYMLINK libspdk_dma.so 00:02:57.939 SO libspdk_ioat.so.7.0 00:02:57.939 LIB libspdk_vfio_user.a 00:02:57.939 SYMLINK libspdk_ioat.so 00:02:57.939 SO libspdk_vfio_user.so.5.0 00:02:57.939 SYMLINK libspdk_vfio_user.so 00:02:58.196 LIB libspdk_util.a 00:02:58.196 SO libspdk_util.so.9.1 00:02:58.196 SYMLINK libspdk_util.so 00:02:58.196 LIB libspdk_trace_parser.a 00:02:58.454 SO libspdk_trace_parser.so.5.0 00:02:58.454 SYMLINK libspdk_trace_parser.so 00:02:58.454 CC lib/json/json_util.o 00:02:58.454 CC lib/json/json_parse.o 00:02:58.454 CC lib/rdma_provider/common.o 00:02:58.454 CC lib/json/json_write.o 00:02:58.454 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:58.454 CC lib/env_dpdk/env.o 00:02:58.455 CC lib/env_dpdk/pci.o 00:02:58.455 CC lib/env_dpdk/memory.o 00:02:58.455 CC lib/env_dpdk/init.o 00:02:58.455 CC lib/env_dpdk/pci_ioat.o 00:02:58.455 CC lib/env_dpdk/threads.o 00:02:58.455 CC lib/env_dpdk/pci_virtio.o 00:02:58.455 CC lib/env_dpdk/pci_vmd.o 00:02:58.455 CC lib/env_dpdk/pci_idxd.o 00:02:58.455 CC lib/env_dpdk/pci_event.o 00:02:58.455 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:58.455 CC lib/env_dpdk/sigbus_handler.o 00:02:58.455 CC lib/env_dpdk/pci_dpdk.o 00:02:58.455 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:58.455 CC lib/conf/conf.o 00:02:58.455 CC lib/idxd/idxd.o 00:02:58.455 CC lib/idxd/idxd_user.o 00:02:58.455 CC lib/idxd/idxd_kernel.o 00:02:58.455 CC lib/rdma_utils/rdma_utils.o 00:02:58.455 CC lib/vmd/vmd.o 00:02:58.455 CC lib/vmd/led.o 00:02:58.713 LIB libspdk_rdma_provider.a 00:02:58.713 SO libspdk_rdma_provider.so.6.0 00:02:58.713 LIB libspdk_conf.a 00:02:58.713 LIB libspdk_json.a 00:02:58.713 SYMLINK libspdk_rdma_provider.so 00:02:58.713 SO libspdk_conf.so.6.0 00:02:58.713 LIB libspdk_rdma_utils.a 00:02:58.713 SO libspdk_json.so.6.0 00:02:58.971 SO libspdk_rdma_utils.so.1.0 00:02:58.971 SYMLINK libspdk_conf.so 00:02:58.971 SYMLINK libspdk_json.so 00:02:58.971 SYMLINK libspdk_rdma_utils.so 00:02:58.971 LIB libspdk_idxd.a 00:02:58.971 SO libspdk_idxd.so.12.0 00:02:58.971 LIB libspdk_vmd.a 00:02:59.228 SO libspdk_vmd.so.6.0 00:02:59.228 SYMLINK libspdk_idxd.so 00:02:59.228 CC lib/jsonrpc/jsonrpc_server.o 00:02:59.228 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:59.228 CC lib/jsonrpc/jsonrpc_client.o 00:02:59.228 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:59.228 SYMLINK libspdk_vmd.so 00:02:59.495 LIB libspdk_jsonrpc.a 00:02:59.496 SO libspdk_jsonrpc.so.6.0 00:02:59.496 SYMLINK libspdk_jsonrpc.so 00:02:59.496 LIB libspdk_env_dpdk.a 
00:02:59.757 SO libspdk_env_dpdk.so.14.1 00:02:59.757 SYMLINK libspdk_env_dpdk.so 00:02:59.757 CC lib/rpc/rpc.o 00:03:00.014 LIB libspdk_rpc.a 00:03:00.014 SO libspdk_rpc.so.6.0 00:03:00.014 SYMLINK libspdk_rpc.so 00:03:00.272 CC lib/notify/notify.o 00:03:00.272 CC lib/notify/notify_rpc.o 00:03:00.272 CC lib/trace/trace.o 00:03:00.272 CC lib/trace/trace_flags.o 00:03:00.272 CC lib/trace/trace_rpc.o 00:03:00.272 CC lib/keyring/keyring.o 00:03:00.272 CC lib/keyring/keyring_rpc.o 00:03:00.531 LIB libspdk_notify.a 00:03:00.531 SO libspdk_notify.so.6.0 00:03:00.531 LIB libspdk_trace.a 00:03:00.531 LIB libspdk_keyring.a 00:03:00.531 SYMLINK libspdk_notify.so 00:03:00.531 SO libspdk_trace.so.10.0 00:03:00.531 SO libspdk_keyring.so.1.0 00:03:00.531 SYMLINK libspdk_trace.so 00:03:00.789 SYMLINK libspdk_keyring.so 00:03:00.789 CC lib/thread/thread.o 00:03:00.789 CC lib/thread/iobuf.o 00:03:01.048 CC lib/sock/sock.o 00:03:01.048 CC lib/sock/sock_rpc.o 00:03:01.307 LIB libspdk_sock.a 00:03:01.307 SO libspdk_sock.so.10.0 00:03:01.307 SYMLINK libspdk_sock.so 00:03:01.566 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:01.566 CC lib/nvme/nvme_fabric.o 00:03:01.566 CC lib/nvme/nvme_ctrlr.o 00:03:01.566 CC lib/nvme/nvme_ns_cmd.o 00:03:01.566 CC lib/nvme/nvme_ns.o 00:03:01.566 CC lib/nvme/nvme_qpair.o 00:03:01.566 CC lib/nvme/nvme_pcie_common.o 00:03:01.566 CC lib/nvme/nvme_pcie.o 00:03:01.566 CC lib/nvme/nvme.o 00:03:01.566 CC lib/nvme/nvme_quirks.o 00:03:01.566 CC lib/nvme/nvme_transport.o 00:03:01.566 CC lib/nvme/nvme_discovery.o 00:03:01.566 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.566 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.566 CC lib/nvme/nvme_tcp.o 00:03:01.566 CC lib/nvme/nvme_opal.o 00:03:01.566 CC lib/nvme/nvme_io_msg.o 00:03:01.566 CC lib/nvme/nvme_zns.o 00:03:01.566 CC lib/nvme/nvme_poll_group.o 00:03:01.566 CC lib/nvme/nvme_auth.o 00:03:01.566 CC lib/nvme/nvme_stubs.o 00:03:01.566 CC lib/nvme/nvme_cuse.o 00:03:01.566 CC lib/nvme/nvme_rdma.o 00:03:01.825 LIB libspdk_thread.a 00:03:02.084 SO libspdk_thread.so.10.1 00:03:02.084 SYMLINK libspdk_thread.so 00:03:02.341 CC lib/accel/accel_rpc.o 00:03:02.341 CC lib/accel/accel.o 00:03:02.341 CC lib/accel/accel_sw.o 00:03:02.341 CC lib/virtio/virtio.o 00:03:02.341 CC lib/virtio/virtio_vhost_user.o 00:03:02.341 CC lib/virtio/virtio_vfio_user.o 00:03:02.341 CC lib/virtio/virtio_pci.o 00:03:02.341 CC lib/init/json_config.o 00:03:02.341 CC lib/init/subsystem.o 00:03:02.341 CC lib/init/subsystem_rpc.o 00:03:02.341 CC lib/init/rpc.o 00:03:02.341 CC lib/blob/blobstore.o 00:03:02.341 CC lib/blob/request.o 00:03:02.341 CC lib/blob/zeroes.o 00:03:02.341 CC lib/blob/blob_bs_dev.o 00:03:02.599 LIB libspdk_init.a 00:03:02.599 LIB libspdk_virtio.a 00:03:02.599 SO libspdk_init.so.5.0 00:03:02.599 SO libspdk_virtio.so.7.0 00:03:02.599 SYMLINK libspdk_init.so 00:03:02.599 SYMLINK libspdk_virtio.so 00:03:02.858 CC lib/event/app.o 00:03:02.858 CC lib/event/reactor.o 00:03:02.858 CC lib/event/log_rpc.o 00:03:02.858 CC lib/event/scheduler_static.o 00:03:02.858 CC lib/event/app_rpc.o 00:03:02.858 LIB libspdk_accel.a 00:03:03.116 SO libspdk_accel.so.15.1 00:03:03.116 SYMLINK libspdk_accel.so 00:03:03.116 LIB libspdk_event.a 00:03:03.116 LIB libspdk_nvme.a 00:03:03.385 SO libspdk_event.so.14.0 00:03:03.385 SO libspdk_nvme.so.13.1 00:03:03.385 SYMLINK libspdk_event.so 00:03:03.385 CC lib/bdev/bdev.o 00:03:03.385 CC lib/bdev/bdev_rpc.o 00:03:03.385 CC lib/bdev/part.o 00:03:03.385 CC lib/bdev/bdev_zone.o 00:03:03.385 CC lib/bdev/scsi_nvme.o 00:03:03.643 SYMLINK libspdk_nvme.so 
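The interleaved CC / LIB / SO / SYMLINK lines above are the SPDK library build: each component's objects are archived into a static libspdk_<name>.a and also linked into a versioned shared object, with the unversioned .so name provided as a symlink. A minimal shell sketch of that general pattern, using hypothetical file names and a guessed soname rather than the actual SPDK make rules:

    cc -c -fPIC log.c -o log.o                        # CC lib/log/log.o
    ar rcs libspdk_log.a log.o                        # LIB libspdk_log.a
    cc -shared -Wl,-soname,libspdk_log.so.7 \
       log.o -o libspdk_log.so.7.0                    # SO libspdk_log.so.7.0
    ln -sf libspdk_log.so.7.0 libspdk_log.so          # SYMLINK libspdk_log.so

In the usual shared-library convention this is why the SO lines carry an ABI version (for example libspdk_log.so.7.0): later link steps resolve against the unversioned symlink, while anything installed binds at runtime to the versioned name.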
00:03:04.575 LIB libspdk_blob.a 00:03:04.575 SO libspdk_blob.so.11.0 00:03:04.575 SYMLINK libspdk_blob.so 00:03:04.831 CC lib/lvol/lvol.o 00:03:04.831 CC lib/blobfs/blobfs.o 00:03:04.831 CC lib/blobfs/tree.o 00:03:05.089 LIB libspdk_bdev.a 00:03:05.089 SO libspdk_bdev.so.15.1 00:03:05.089 SYMLINK libspdk_bdev.so 00:03:05.350 LIB libspdk_blobfs.a 00:03:05.350 SO libspdk_blobfs.so.10.0 00:03:05.350 LIB libspdk_lvol.a 00:03:05.350 SO libspdk_lvol.so.10.0 00:03:05.350 SYMLINK libspdk_blobfs.so 00:03:05.350 SYMLINK libspdk_lvol.so 00:03:05.350 CC lib/nvmf/ctrlr.o 00:03:05.350 CC lib/nvmf/ctrlr_bdev.o 00:03:05.350 CC lib/nvmf/ctrlr_discovery.o 00:03:05.350 CC lib/nvmf/subsystem.o 00:03:05.350 CC lib/nvmf/nvmf.o 00:03:05.350 CC lib/nvmf/tcp.o 00:03:05.350 CC lib/nvmf/nvmf_rpc.o 00:03:05.350 CC lib/nvmf/transport.o 00:03:05.350 CC lib/nvmf/stubs.o 00:03:05.350 CC lib/nvmf/mdns_server.o 00:03:05.350 CC lib/nvmf/rdma.o 00:03:05.350 CC lib/nvmf/auth.o 00:03:05.607 CC lib/ublk/ublk.o 00:03:05.607 CC lib/ublk/ublk_rpc.o 00:03:05.607 CC lib/scsi/dev.o 00:03:05.607 CC lib/scsi/lun.o 00:03:05.607 CC lib/scsi/port.o 00:03:05.607 CC lib/scsi/scsi.o 00:03:05.607 CC lib/nbd/nbd.o 00:03:05.607 CC lib/nbd/nbd_rpc.o 00:03:05.607 CC lib/scsi/scsi_bdev.o 00:03:05.607 CC lib/scsi/scsi_rpc.o 00:03:05.607 CC lib/scsi/task.o 00:03:05.607 CC lib/scsi/scsi_pr.o 00:03:05.607 CC lib/ftl/ftl_core.o 00:03:05.607 CC lib/ftl/ftl_init.o 00:03:05.607 CC lib/ftl/ftl_layout.o 00:03:05.607 CC lib/ftl/ftl_debug.o 00:03:05.607 CC lib/ftl/ftl_io.o 00:03:05.607 CC lib/ftl/ftl_sb.o 00:03:05.607 CC lib/ftl/ftl_l2p.o 00:03:05.607 CC lib/ftl/ftl_nv_cache.o 00:03:05.607 CC lib/ftl/ftl_band.o 00:03:05.607 CC lib/ftl/ftl_l2p_flat.o 00:03:05.607 CC lib/ftl/ftl_band_ops.o 00:03:05.607 CC lib/ftl/ftl_writer.o 00:03:05.607 CC lib/ftl/ftl_rq.o 00:03:05.607 CC lib/ftl/ftl_reloc.o 00:03:05.607 CC lib/ftl/ftl_l2p_cache.o 00:03:05.607 CC lib/ftl/ftl_p2l.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.607 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.607 CC lib/ftl/utils/ftl_conf.o 00:03:05.607 CC lib/ftl/utils/ftl_mempool.o 00:03:05.607 CC lib/ftl/utils/ftl_bitmap.o 00:03:05.607 CC lib/ftl/utils/ftl_md.o 00:03:05.607 CC lib/ftl/utils/ftl_property.o 00:03:05.607 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:05.607 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:05.607 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:05.607 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:05.607 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:05.607 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:05.607 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:05.607 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:05.607 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:05.607 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:05.607 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:05.607 CC lib/ftl/base/ftl_base_dev.o 00:03:05.607 CC lib/ftl/ftl_trace.o 00:03:05.607 CC lib/ftl/base/ftl_base_bdev.o 00:03:06.171 LIB libspdk_nbd.a 00:03:06.171 SO libspdk_nbd.so.7.0 00:03:06.171 SYMLINK libspdk_nbd.so 00:03:06.171 LIB libspdk_ublk.a 00:03:06.171 LIB 
libspdk_scsi.a 00:03:06.171 SO libspdk_ublk.so.3.0 00:03:06.171 SO libspdk_scsi.so.9.0 00:03:06.171 SYMLINK libspdk_ublk.so 00:03:06.171 SYMLINK libspdk_scsi.so 00:03:06.428 LIB libspdk_ftl.a 00:03:06.428 CC lib/vhost/vhost.o 00:03:06.428 CC lib/vhost/vhost_scsi.o 00:03:06.428 CC lib/vhost/vhost_blk.o 00:03:06.428 CC lib/vhost/vhost_rpc.o 00:03:06.428 CC lib/vhost/rte_vhost_user.o 00:03:06.685 CC lib/iscsi/conn.o 00:03:06.685 CC lib/iscsi/iscsi.o 00:03:06.685 CC lib/iscsi/init_grp.o 00:03:06.685 CC lib/iscsi/md5.o 00:03:06.685 CC lib/iscsi/portal_grp.o 00:03:06.685 CC lib/iscsi/param.o 00:03:06.685 CC lib/iscsi/tgt_node.o 00:03:06.685 CC lib/iscsi/iscsi_subsystem.o 00:03:06.685 CC lib/iscsi/task.o 00:03:06.685 CC lib/iscsi/iscsi_rpc.o 00:03:06.685 SO libspdk_ftl.so.9.0 00:03:06.943 SYMLINK libspdk_ftl.so 00:03:07.201 LIB libspdk_nvmf.a 00:03:07.201 SO libspdk_nvmf.so.18.1 00:03:07.461 LIB libspdk_vhost.a 00:03:07.461 SYMLINK libspdk_nvmf.so 00:03:07.461 SO libspdk_vhost.so.8.0 00:03:07.461 SYMLINK libspdk_vhost.so 00:03:07.461 LIB libspdk_iscsi.a 00:03:07.720 SO libspdk_iscsi.so.8.0 00:03:07.720 SYMLINK libspdk_iscsi.so 00:03:08.296 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.296 CC module/accel/error/accel_error.o 00:03:08.296 CC module/accel/error/accel_error_rpc.o 00:03:08.296 CC module/sock/posix/posix.o 00:03:08.296 CC module/keyring/file/keyring.o 00:03:08.296 CC module/keyring/file/keyring_rpc.o 00:03:08.296 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.296 CC module/accel/ioat/accel_ioat.o 00:03:08.296 CC module/keyring/linux/keyring.o 00:03:08.296 LIB libspdk_env_dpdk_rpc.a 00:03:08.296 CC module/keyring/linux/keyring_rpc.o 00:03:08.296 CC module/accel/dsa/accel_dsa.o 00:03:08.296 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.296 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.296 CC module/blob/bdev/blob_bdev.o 00:03:08.296 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.296 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.296 CC module/accel/iaa/accel_iaa.o 00:03:08.296 CC module/accel/iaa/accel_iaa_rpc.o 00:03:08.296 SO libspdk_env_dpdk_rpc.so.6.0 00:03:08.296 SYMLINK libspdk_env_dpdk_rpc.so 00:03:08.554 LIB libspdk_keyring_file.a 00:03:08.554 LIB libspdk_keyring_linux.a 00:03:08.554 LIB libspdk_scheduler_gscheduler.a 00:03:08.554 LIB libspdk_accel_ioat.a 00:03:08.554 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.554 LIB libspdk_accel_error.a 00:03:08.554 SO libspdk_keyring_file.so.1.0 00:03:08.554 LIB libspdk_scheduler_dynamic.a 00:03:08.554 SO libspdk_keyring_linux.so.1.0 00:03:08.554 SO libspdk_scheduler_gscheduler.so.4.0 00:03:08.554 SO libspdk_accel_ioat.so.6.0 00:03:08.554 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:08.554 SO libspdk_accel_error.so.2.0 00:03:08.554 LIB libspdk_accel_iaa.a 00:03:08.554 SO libspdk_scheduler_dynamic.so.4.0 00:03:08.554 SYMLINK libspdk_keyring_file.so 00:03:08.554 SYMLINK libspdk_keyring_linux.so 00:03:08.554 LIB libspdk_accel_dsa.a 00:03:08.554 SYMLINK libspdk_scheduler_gscheduler.so 00:03:08.554 LIB libspdk_blob_bdev.a 00:03:08.554 SO libspdk_accel_iaa.so.3.0 00:03:08.554 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:08.554 SYMLINK libspdk_accel_ioat.so 00:03:08.554 SYMLINK libspdk_accel_error.so 00:03:08.554 SO libspdk_blob_bdev.so.11.0 00:03:08.554 SO libspdk_accel_dsa.so.5.0 00:03:08.554 SYMLINK libspdk_scheduler_dynamic.so 00:03:08.554 SYMLINK libspdk_accel_iaa.so 00:03:08.554 SYMLINK libspdk_blob_bdev.so 00:03:08.554 SYMLINK libspdk_accel_dsa.so 00:03:08.813 LIB libspdk_sock_posix.a 00:03:08.813 SO 
libspdk_sock_posix.so.6.0 00:03:09.071 SYMLINK libspdk_sock_posix.so 00:03:09.071 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:09.071 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.071 CC module/bdev/ftl/bdev_ftl.o 00:03:09.071 CC module/bdev/raid/bdev_raid.o 00:03:09.071 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.071 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.072 CC module/bdev/raid/raid0.o 00:03:09.072 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.072 CC module/bdev/raid/raid1.o 00:03:09.072 CC module/bdev/delay/vbdev_delay.o 00:03:09.072 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.072 CC module/bdev/raid/concat.o 00:03:09.072 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.072 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:09.072 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:09.072 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.072 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:09.072 CC module/bdev/split/vbdev_split.o 00:03:09.072 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.072 CC module/bdev/error/vbdev_error.o 00:03:09.072 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.072 CC module/bdev/gpt/gpt.o 00:03:09.072 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.072 CC module/bdev/malloc/bdev_malloc.o 00:03:09.072 CC module/bdev/passthru/vbdev_passthru.o 00:03:09.072 CC module/bdev/null/bdev_null_rpc.o 00:03:09.072 CC module/bdev/null/bdev_null.o 00:03:09.072 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.072 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.072 CC module/bdev/iscsi/bdev_iscsi.o 00:03:09.072 CC module/bdev/aio/bdev_aio.o 00:03:09.072 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:09.072 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.072 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.072 CC module/bdev/nvme/bdev_nvme.o 00:03:09.072 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.072 CC module/bdev/nvme/bdev_mdns_client.o 00:03:09.072 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.072 CC module/bdev/nvme/nvme_rpc.o 00:03:09.072 CC module/bdev/nvme/vbdev_opal.o 00:03:09.072 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.072 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.330 LIB libspdk_blobfs_bdev.a 00:03:09.330 SO libspdk_blobfs_bdev.so.6.0 00:03:09.330 LIB libspdk_bdev_split.a 00:03:09.330 LIB libspdk_bdev_gpt.a 00:03:09.330 SYMLINK libspdk_blobfs_bdev.so 00:03:09.330 LIB libspdk_bdev_error.a 00:03:09.330 SO libspdk_bdev_split.so.6.0 00:03:09.330 SO libspdk_bdev_gpt.so.6.0 00:03:09.330 LIB libspdk_bdev_null.a 00:03:09.330 LIB libspdk_bdev_ftl.a 00:03:09.330 SO libspdk_bdev_error.so.6.0 00:03:09.330 LIB libspdk_bdev_passthru.a 00:03:09.330 SYMLINK libspdk_bdev_split.so 00:03:09.588 SYMLINK libspdk_bdev_gpt.so 00:03:09.588 SO libspdk_bdev_ftl.so.6.0 00:03:09.588 LIB libspdk_bdev_zone_block.a 00:03:09.588 SO libspdk_bdev_null.so.6.0 00:03:09.588 LIB libspdk_bdev_malloc.a 00:03:09.588 LIB libspdk_bdev_aio.a 00:03:09.588 LIB libspdk_bdev_delay.a 00:03:09.588 SO libspdk_bdev_passthru.so.6.0 00:03:09.588 SO libspdk_bdev_zone_block.so.6.0 00:03:09.588 SO libspdk_bdev_malloc.so.6.0 00:03:09.588 SYMLINK libspdk_bdev_error.so 00:03:09.588 SO libspdk_bdev_aio.so.6.0 00:03:09.588 LIB libspdk_bdev_iscsi.a 00:03:09.588 SYMLINK libspdk_bdev_null.so 00:03:09.588 SYMLINK libspdk_bdev_ftl.so 00:03:09.588 SO libspdk_bdev_delay.so.6.0 00:03:09.588 SO libspdk_bdev_iscsi.so.6.0 00:03:09.588 SYMLINK libspdk_bdev_passthru.so 00:03:09.588 SYMLINK libspdk_bdev_malloc.so 00:03:09.588 SYMLINK libspdk_bdev_zone_block.so 00:03:09.588 SYMLINK libspdk_bdev_aio.so 00:03:09.588 LIB 
libspdk_bdev_virtio.a 00:03:09.588 LIB libspdk_bdev_lvol.a 00:03:09.588 SYMLINK libspdk_bdev_delay.so 00:03:09.588 SO libspdk_bdev_virtio.so.6.0 00:03:09.588 SYMLINK libspdk_bdev_iscsi.so 00:03:09.588 SO libspdk_bdev_lvol.so.6.0 00:03:09.588 SYMLINK libspdk_bdev_virtio.so 00:03:09.588 SYMLINK libspdk_bdev_lvol.so 00:03:09.846 LIB libspdk_bdev_raid.a 00:03:09.846 SO libspdk_bdev_raid.so.6.0 00:03:10.105 SYMLINK libspdk_bdev_raid.so 00:03:10.672 LIB libspdk_bdev_nvme.a 00:03:10.672 SO libspdk_bdev_nvme.so.7.0 00:03:10.672 SYMLINK libspdk_bdev_nvme.so 00:03:11.238 CC module/event/subsystems/sock/sock.o 00:03:11.238 CC module/event/subsystems/vmd/vmd.o 00:03:11.238 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.238 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.238 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.238 CC module/event/subsystems/keyring/keyring.o 00:03:11.238 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.238 CC module/event/subsystems/scheduler/scheduler.o 00:03:11.496 LIB libspdk_event_sock.a 00:03:11.496 LIB libspdk_event_vhost_blk.a 00:03:11.496 LIB libspdk_event_vmd.a 00:03:11.496 SO libspdk_event_sock.so.5.0 00:03:11.496 LIB libspdk_event_keyring.a 00:03:11.496 LIB libspdk_event_scheduler.a 00:03:11.496 LIB libspdk_event_iobuf.a 00:03:11.496 SO libspdk_event_vhost_blk.so.3.0 00:03:11.496 SO libspdk_event_vmd.so.6.0 00:03:11.496 SO libspdk_event_scheduler.so.4.0 00:03:11.496 SO libspdk_event_keyring.so.1.0 00:03:11.496 SO libspdk_event_iobuf.so.3.0 00:03:11.496 SYMLINK libspdk_event_sock.so 00:03:11.496 SYMLINK libspdk_event_vmd.so 00:03:11.496 SYMLINK libspdk_event_vhost_blk.so 00:03:11.496 SYMLINK libspdk_event_scheduler.so 00:03:11.496 SYMLINK libspdk_event_keyring.so 00:03:11.496 SYMLINK libspdk_event_iobuf.so 00:03:11.755 CC module/event/subsystems/accel/accel.o 00:03:12.014 LIB libspdk_event_accel.a 00:03:12.014 SO libspdk_event_accel.so.6.0 00:03:12.014 SYMLINK libspdk_event_accel.so 00:03:12.271 CC module/event/subsystems/bdev/bdev.o 00:03:12.529 LIB libspdk_event_bdev.a 00:03:12.529 SO libspdk_event_bdev.so.6.0 00:03:12.529 SYMLINK libspdk_event_bdev.so 00:03:12.786 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.786 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.786 CC module/event/subsystems/nbd/nbd.o 00:03:12.786 CC module/event/subsystems/scsi/scsi.o 00:03:12.786 CC module/event/subsystems/ublk/ublk.o 00:03:13.044 LIB libspdk_event_nbd.a 00:03:13.044 LIB libspdk_event_scsi.a 00:03:13.044 LIB libspdk_event_nvmf.a 00:03:13.044 SO libspdk_event_nbd.so.6.0 00:03:13.044 LIB libspdk_event_ublk.a 00:03:13.044 SO libspdk_event_scsi.so.6.0 00:03:13.044 SO libspdk_event_nvmf.so.6.0 00:03:13.044 SO libspdk_event_ublk.so.3.0 00:03:13.044 SYMLINK libspdk_event_nbd.so 00:03:13.044 SYMLINK libspdk_event_scsi.so 00:03:13.044 SYMLINK libspdk_event_nvmf.so 00:03:13.044 SYMLINK libspdk_event_ublk.so 00:03:13.302 CC module/event/subsystems/iscsi/iscsi.o 00:03:13.302 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:13.594 LIB libspdk_event_iscsi.a 00:03:13.594 SO libspdk_event_iscsi.so.6.0 00:03:13.594 LIB libspdk_event_vhost_scsi.a 00:03:13.594 SO libspdk_event_vhost_scsi.so.3.0 00:03:13.594 SYMLINK libspdk_event_iscsi.so 00:03:13.594 SYMLINK libspdk_event_vhost_scsi.so 00:03:13.852 SO libspdk.so.6.0 00:03:13.852 SYMLINK libspdk.so 00:03:14.114 CXX app/trace/trace.o 00:03:14.114 CC app/spdk_top/spdk_top.o 00:03:14.114 CC app/trace_record/trace_record.o 00:03:14.114 CC app/spdk_nvme_perf/perf.o 00:03:14.114 CC app/spdk_nvme_identify/identify.o 
00:03:14.114 CC app/spdk_lspci/spdk_lspci.o 00:03:14.114 CC app/spdk_nvme_discover/discovery_aer.o 00:03:14.114 TEST_HEADER include/spdk/accel.h 00:03:14.114 TEST_HEADER include/spdk/accel_module.h 00:03:14.114 CC test/rpc_client/rpc_client_test.o 00:03:14.114 TEST_HEADER include/spdk/barrier.h 00:03:14.114 TEST_HEADER include/spdk/assert.h 00:03:14.114 TEST_HEADER include/spdk/bdev.h 00:03:14.114 TEST_HEADER include/spdk/base64.h 00:03:14.114 TEST_HEADER include/spdk/bdev_module.h 00:03:14.114 TEST_HEADER include/spdk/bit_pool.h 00:03:14.114 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.114 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.114 TEST_HEADER include/spdk/bit_array.h 00:03:14.114 TEST_HEADER include/spdk/blobfs.h 00:03:14.114 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.114 TEST_HEADER include/spdk/blob.h 00:03:14.114 TEST_HEADER include/spdk/config.h 00:03:14.114 CC app/spdk_dd/spdk_dd.o 00:03:14.114 TEST_HEADER include/spdk/conf.h 00:03:14.114 TEST_HEADER include/spdk/cpuset.h 00:03:14.114 TEST_HEADER include/spdk/crc16.h 00:03:14.114 TEST_HEADER include/spdk/crc32.h 00:03:14.114 TEST_HEADER include/spdk/crc64.h 00:03:14.114 TEST_HEADER include/spdk/dma.h 00:03:14.114 TEST_HEADER include/spdk/dif.h 00:03:14.114 TEST_HEADER include/spdk/endian.h 00:03:14.114 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.114 TEST_HEADER include/spdk/event.h 00:03:14.114 CC app/nvmf_tgt/nvmf_main.o 00:03:14.114 TEST_HEADER include/spdk/env.h 00:03:14.114 TEST_HEADER include/spdk/fd_group.h 00:03:14.114 TEST_HEADER include/spdk/fd.h 00:03:14.114 TEST_HEADER include/spdk/ftl.h 00:03:14.114 TEST_HEADER include/spdk/file.h 00:03:14.114 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.114 TEST_HEADER include/spdk/hexlify.h 00:03:14.114 TEST_HEADER include/spdk/histogram_data.h 00:03:14.114 TEST_HEADER include/spdk/idxd.h 00:03:14.114 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.114 TEST_HEADER include/spdk/init.h 00:03:14.114 TEST_HEADER include/spdk/ioat.h 00:03:14.114 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.114 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.114 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.115 TEST_HEADER include/spdk/json.h 00:03:14.115 TEST_HEADER include/spdk/keyring_module.h 00:03:14.115 TEST_HEADER include/spdk/likely.h 00:03:14.115 TEST_HEADER include/spdk/keyring.h 00:03:14.115 TEST_HEADER include/spdk/lvol.h 00:03:14.115 TEST_HEADER include/spdk/log.h 00:03:14.115 TEST_HEADER include/spdk/mmio.h 00:03:14.115 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:14.115 TEST_HEADER include/spdk/nbd.h 00:03:14.115 TEST_HEADER include/spdk/memory.h 00:03:14.115 TEST_HEADER include/spdk/notify.h 00:03:14.115 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.115 TEST_HEADER include/spdk/nvme.h 00:03:14.115 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.115 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:14.115 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.115 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.115 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.115 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.115 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.115 TEST_HEADER include/spdk/nvmf.h 00:03:14.115 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.115 TEST_HEADER include/spdk/opal_spec.h 00:03:14.115 TEST_HEADER include/spdk/pci_ids.h 00:03:14.115 TEST_HEADER include/spdk/opal.h 00:03:14.115 TEST_HEADER include/spdk/pipe.h 00:03:14.115 CC app/iscsi_tgt/iscsi_tgt.o 00:03:14.115 TEST_HEADER include/spdk/rpc.h 00:03:14.115 TEST_HEADER include/spdk/reduce.h 00:03:14.115 TEST_HEADER 
include/spdk/scheduler.h 00:03:14.115 TEST_HEADER include/spdk/queue.h 00:03:14.115 TEST_HEADER include/spdk/scsi.h 00:03:14.115 TEST_HEADER include/spdk/stdinc.h 00:03:14.115 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.115 TEST_HEADER include/spdk/sock.h 00:03:14.115 TEST_HEADER include/spdk/thread.h 00:03:14.115 TEST_HEADER include/spdk/trace.h 00:03:14.115 TEST_HEADER include/spdk/trace_parser.h 00:03:14.115 TEST_HEADER include/spdk/string.h 00:03:14.115 TEST_HEADER include/spdk/tree.h 00:03:14.115 TEST_HEADER include/spdk/ublk.h 00:03:14.115 TEST_HEADER include/spdk/uuid.h 00:03:14.115 TEST_HEADER include/spdk/util.h 00:03:14.115 TEST_HEADER include/spdk/version.h 00:03:14.115 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.115 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.115 TEST_HEADER include/spdk/vhost.h 00:03:14.115 CC app/spdk_tgt/spdk_tgt.o 00:03:14.115 TEST_HEADER include/spdk/vmd.h 00:03:14.115 TEST_HEADER include/spdk/xor.h 00:03:14.115 TEST_HEADER include/spdk/zipf.h 00:03:14.115 CXX test/cpp_headers/accel_module.o 00:03:14.115 CXX test/cpp_headers/assert.o 00:03:14.115 CXX test/cpp_headers/accel.o 00:03:14.115 CXX test/cpp_headers/barrier.o 00:03:14.115 CXX test/cpp_headers/base64.o 00:03:14.115 CXX test/cpp_headers/bdev.o 00:03:14.115 CXX test/cpp_headers/bdev_module.o 00:03:14.115 CXX test/cpp_headers/bit_pool.o 00:03:14.115 CXX test/cpp_headers/bdev_zone.o 00:03:14.115 CXX test/cpp_headers/blob_bdev.o 00:03:14.115 CXX test/cpp_headers/bit_array.o 00:03:14.115 CXX test/cpp_headers/blob.o 00:03:14.115 CXX test/cpp_headers/config.o 00:03:14.115 CXX test/cpp_headers/blobfs_bdev.o 00:03:14.115 CXX test/cpp_headers/cpuset.o 00:03:14.115 CXX test/cpp_headers/crc16.o 00:03:14.115 CXX test/cpp_headers/blobfs.o 00:03:14.115 CXX test/cpp_headers/crc32.o 00:03:14.115 CXX test/cpp_headers/conf.o 00:03:14.115 CXX test/cpp_headers/dif.o 00:03:14.115 CXX test/cpp_headers/dma.o 00:03:14.115 CXX test/cpp_headers/crc64.o 00:03:14.115 CXX test/cpp_headers/env_dpdk.o 00:03:14.115 CXX test/cpp_headers/env.o 00:03:14.115 CXX test/cpp_headers/event.o 00:03:14.115 CXX test/cpp_headers/endian.o 00:03:14.115 CXX test/cpp_headers/file.o 00:03:14.115 CXX test/cpp_headers/fd.o 00:03:14.115 CXX test/cpp_headers/fd_group.o 00:03:14.115 CXX test/cpp_headers/ftl.o 00:03:14.115 CXX test/cpp_headers/gpt_spec.o 00:03:14.115 CXX test/cpp_headers/hexlify.o 00:03:14.115 CXX test/cpp_headers/idxd.o 00:03:14.115 CXX test/cpp_headers/histogram_data.o 00:03:14.115 CXX test/cpp_headers/ioat.o 00:03:14.115 CXX test/cpp_headers/idxd_spec.o 00:03:14.115 CXX test/cpp_headers/init.o 00:03:14.115 CXX test/cpp_headers/iscsi_spec.o 00:03:14.115 CXX test/cpp_headers/ioat_spec.o 00:03:14.115 CXX test/cpp_headers/keyring.o 00:03:14.115 CXX test/cpp_headers/json.o 00:03:14.115 CXX test/cpp_headers/keyring_module.o 00:03:14.115 CXX test/cpp_headers/jsonrpc.o 00:03:14.115 CXX test/cpp_headers/lvol.o 00:03:14.115 CXX test/cpp_headers/log.o 00:03:14.115 CXX test/cpp_headers/likely.o 00:03:14.115 CXX test/cpp_headers/memory.o 00:03:14.115 CXX test/cpp_headers/mmio.o 00:03:14.115 CXX test/cpp_headers/nbd.o 00:03:14.115 CXX test/cpp_headers/notify.o 00:03:14.115 CXX test/cpp_headers/nvme.o 00:03:14.115 CXX test/cpp_headers/nvme_intel.o 00:03:14.115 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:14.115 CXX test/cpp_headers/nvme_spec.o 00:03:14.115 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.115 CXX test/cpp_headers/nvme_zns.o 00:03:14.115 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.115 CXX test/cpp_headers/nvmf_fc_spec.o 
00:03:14.115 CXX test/cpp_headers/nvmf.o 00:03:14.115 CXX test/cpp_headers/nvmf_transport.o 00:03:14.115 CXX test/cpp_headers/nvmf_spec.o 00:03:14.115 CXX test/cpp_headers/opal.o 00:03:14.115 CXX test/cpp_headers/opal_spec.o 00:03:14.115 CXX test/cpp_headers/pci_ids.o 00:03:14.115 CXX test/cpp_headers/pipe.o 00:03:14.115 CXX test/cpp_headers/queue.o 00:03:14.115 CC app/fio/nvme/fio_plugin.o 00:03:14.115 CXX test/cpp_headers/reduce.o 00:03:14.115 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:14.378 CC test/env/pci/pci_ut.o 00:03:14.379 CC examples/ioat/perf/perf.o 00:03:14.379 CXX test/cpp_headers/rpc.o 00:03:14.379 CC test/thread/poller_perf/poller_perf.o 00:03:14.379 CC examples/util/zipf/zipf.o 00:03:14.379 CC test/dma/test_dma/test_dma.o 00:03:14.379 CC test/app/histogram_perf/histogram_perf.o 00:03:14.379 CC test/env/memory/memory_ut.o 00:03:14.379 CC test/env/vtophys/vtophys.o 00:03:14.379 CC test/app/jsoncat/jsoncat.o 00:03:14.379 CC app/fio/bdev/fio_plugin.o 00:03:14.379 CC test/app/bdev_svc/bdev_svc.o 00:03:14.379 CC examples/ioat/verify/verify.o 00:03:14.379 CC test/app/stub/stub.o 00:03:14.379 LINK rpc_client_test 00:03:14.379 LINK spdk_lspci 00:03:14.379 LINK nvmf_tgt 00:03:14.641 CC test/env/mem_callbacks/mem_callbacks.o 00:03:14.641 LINK interrupt_tgt 00:03:14.641 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.641 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.641 LINK poller_perf 00:03:14.641 LINK spdk_nvme_discover 00:03:14.641 LINK spdk_tgt 00:03:14.641 LINK zipf 00:03:14.641 LINK env_dpdk_post_init 00:03:14.641 CXX test/cpp_headers/scheduler.o 00:03:14.641 LINK jsoncat 00:03:14.641 CXX test/cpp_headers/scsi.o 00:03:14.641 CXX test/cpp_headers/scsi_spec.o 00:03:14.641 CXX test/cpp_headers/sock.o 00:03:14.641 CXX test/cpp_headers/stdinc.o 00:03:14.641 CXX test/cpp_headers/string.o 00:03:14.641 CXX test/cpp_headers/thread.o 00:03:14.641 CXX test/cpp_headers/trace.o 00:03:14.641 CXX test/cpp_headers/trace_parser.o 00:03:14.641 CXX test/cpp_headers/tree.o 00:03:14.641 CXX test/cpp_headers/ublk.o 00:03:14.898 CXX test/cpp_headers/uuid.o 00:03:14.898 CXX test/cpp_headers/version.o 00:03:14.898 CXX test/cpp_headers/vfio_user_pci.o 00:03:14.898 LINK spdk_trace_record 00:03:14.898 CXX test/cpp_headers/vfio_user_spec.o 00:03:14.898 CXX test/cpp_headers/vhost.o 00:03:14.898 CXX test/cpp_headers/vmd.o 00:03:14.898 CXX test/cpp_headers/util.o 00:03:14.898 CXX test/cpp_headers/xor.o 00:03:14.898 CXX test/cpp_headers/zipf.o 00:03:14.898 LINK spdk_dd 00:03:14.898 LINK bdev_svc 00:03:14.898 LINK iscsi_tgt 00:03:14.898 LINK verify 00:03:14.898 LINK histogram_perf 00:03:14.898 LINK vtophys 00:03:14.898 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.898 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.898 LINK stub 00:03:14.898 LINK test_dma 00:03:14.898 LINK ioat_perf 00:03:15.154 LINK spdk_trace 00:03:15.154 LINK pci_ut 00:03:15.154 CC examples/sock/hello_world/hello_sock.o 00:03:15.154 CC examples/idxd/perf/perf.o 00:03:15.154 CC examples/vmd/led/led.o 00:03:15.154 LINK spdk_nvme 00:03:15.154 CC examples/vmd/lsvmd/lsvmd.o 00:03:15.154 LINK spdk_nvme_identify 00:03:15.154 CC examples/thread/thread/thread_ex.o 00:03:15.154 LINK spdk_bdev 00:03:15.154 LINK nvme_fuzz 00:03:15.154 CC test/event/reactor_perf/reactor_perf.o 00:03:15.154 CC test/event/app_repeat/app_repeat.o 00:03:15.154 CC test/event/reactor/reactor.o 00:03:15.154 CC test/event/event_perf/event_perf.o 00:03:15.412 CC test/event/scheduler/scheduler.o 00:03:15.412 LINK mem_callbacks 00:03:15.412 LINK lsvmd 
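The TEST_HEADER and CXX test/cpp_headers/*.o lines above appear to be a header self-sufficiency check: every public spdk/*.h header is compiled as its own C++ translation unit, so a header that is missing includes or is not C++-clean fails here rather than in an application build. A rough shell sketch of that idea, with illustrative paths and compiler flags rather than the test's real harness:

    # compile one translation unit per public header; any header that does not
    # stand alone (or does not parse as C++) breaks the loop with a compile error
    for h in include/spdk/*.h; do
        n=$(basename "$h" .h)
        printf '#include <spdk/%s.h>\n' "$n" > "cpp_headers_${n}.cpp"
        c++ -Iinclude -c "cpp_headers_${n}.cpp" -o "cpp_headers_${n}.o"
    done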
00:03:15.412 LINK led 00:03:15.412 LINK vhost_fuzz 00:03:15.412 LINK hello_sock 00:03:15.412 CC app/vhost/vhost.o 00:03:15.412 LINK reactor 00:03:15.412 LINK reactor_perf 00:03:15.412 CC test/nvme/simple_copy/simple_copy.o 00:03:15.412 CC test/nvme/overhead/overhead.o 00:03:15.412 CC test/nvme/startup/startup.o 00:03:15.412 CC test/nvme/reserve/reserve.o 00:03:15.412 CC test/nvme/sgl/sgl.o 00:03:15.412 CC test/nvme/fused_ordering/fused_ordering.o 00:03:15.412 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:15.412 CC test/nvme/e2edp/nvme_dp.o 00:03:15.412 CC test/nvme/boot_partition/boot_partition.o 00:03:15.412 LINK app_repeat 00:03:15.412 CC test/nvme/reset/reset.o 00:03:15.412 CC test/nvme/compliance/nvme_compliance.o 00:03:15.412 LINK event_perf 00:03:15.412 CC test/nvme/connect_stress/connect_stress.o 00:03:15.412 CC test/nvme/fdp/fdp.o 00:03:15.412 CC test/nvme/cuse/cuse.o 00:03:15.412 CC test/nvme/err_injection/err_injection.o 00:03:15.412 CC test/nvme/aer/aer.o 00:03:15.412 LINK spdk_nvme_perf 00:03:15.412 CC test/blobfs/mkfs/mkfs.o 00:03:15.412 CC test/accel/dif/dif.o 00:03:15.412 LINK thread 00:03:15.412 LINK idxd_perf 00:03:15.412 LINK spdk_top 00:03:15.670 LINK scheduler 00:03:15.670 CC test/lvol/esnap/esnap.o 00:03:15.670 LINK memory_ut 00:03:15.670 LINK vhost 00:03:15.670 LINK boot_partition 00:03:15.670 LINK startup 00:03:15.670 LINK fused_ordering 00:03:15.670 LINK doorbell_aers 00:03:15.670 LINK connect_stress 00:03:15.670 LINK simple_copy 00:03:15.670 LINK err_injection 00:03:15.670 LINK mkfs 00:03:15.670 LINK reserve 00:03:15.670 LINK reset 00:03:15.670 LINK sgl 00:03:15.670 LINK aer 00:03:15.670 LINK nvme_dp 00:03:15.670 LINK overhead 00:03:15.670 LINK nvme_compliance 00:03:15.670 LINK fdp 00:03:15.927 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.927 CC examples/nvme/hotplug/hotplug.o 00:03:15.927 CC examples/nvme/reconnect/reconnect.o 00:03:15.927 CC examples/nvme/hello_world/hello_world.o 00:03:15.927 CC examples/nvme/abort/abort.o 00:03:15.927 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:15.927 CC examples/nvme/arbitration/arbitration.o 00:03:15.927 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:15.927 LINK dif 00:03:15.927 CC examples/accel/perf/accel_perf.o 00:03:15.927 CC examples/blob/cli/blobcli.o 00:03:15.927 CC examples/blob/hello_world/hello_blob.o 00:03:15.927 LINK pmr_persistence 00:03:15.927 LINK cmb_copy 00:03:15.927 LINK hotplug 00:03:15.927 LINK hello_world 00:03:16.184 LINK arbitration 00:03:16.184 LINK reconnect 00:03:16.184 LINK abort 00:03:16.184 LINK hello_blob 00:03:16.184 LINK iscsi_fuzz 00:03:16.184 LINK nvme_manage 00:03:16.184 LINK accel_perf 00:03:16.441 LINK blobcli 00:03:16.441 CC test/bdev/bdevio/bdevio.o 00:03:16.441 LINK cuse 00:03:16.699 LINK bdevio 00:03:16.699 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.699 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.957 LINK hello_bdev 00:03:17.215 LINK bdevperf 00:03:17.782 CC examples/nvmf/nvmf/nvmf.o 00:03:18.041 LINK nvmf 00:03:18.978 LINK esnap 00:03:19.545 00:03:19.545 real 0m43.469s 00:03:19.545 user 6m35.908s 00:03:19.545 sys 3m20.469s 00:03:19.545 14:37:53 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:19.545 14:37:53 make -- common/autotest_common.sh@10 -- $ set +x 00:03:19.545 ************************************ 00:03:19.545 END TEST make 00:03:19.545 ************************************ 00:03:19.545 14:37:53 -- common/autotest_common.sh@1142 -- $ return 0 00:03:19.545 14:37:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:19.545 
14:37:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:19.545 14:37:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:19.545 14:37:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.545 14:37:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:19.545 14:37:53 -- pm/common@44 -- $ pid=2566811 00:03:19.545 14:37:53 -- pm/common@50 -- $ kill -TERM 2566811 00:03:19.545 14:37:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.545 14:37:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:19.545 14:37:53 -- pm/common@44 -- $ pid=2566813 00:03:19.545 14:37:53 -- pm/common@50 -- $ kill -TERM 2566813 00:03:19.545 14:37:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.545 14:37:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:19.545 14:37:53 -- pm/common@44 -- $ pid=2566815 00:03:19.545 14:37:53 -- pm/common@50 -- $ kill -TERM 2566815 00:03:19.545 14:37:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.545 14:37:53 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:19.545 14:37:53 -- pm/common@44 -- $ pid=2566837 00:03:19.545 14:37:53 -- pm/common@50 -- $ sudo -E kill -TERM 2566837 00:03:19.545 14:37:53 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:19.545 14:37:53 -- nvmf/common.sh@7 -- # uname -s 00:03:19.545 14:37:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:19.545 14:37:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:19.545 14:37:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:19.545 14:37:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:19.545 14:37:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:19.545 14:37:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:19.545 14:37:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:19.545 14:37:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:19.545 14:37:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:19.545 14:37:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:19.545 14:37:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:03:19.545 14:37:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:03:19.545 14:37:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:19.545 14:37:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:19.545 14:37:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:19.545 14:37:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:19.545 14:37:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:19.545 14:37:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:19.545 14:37:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:19.545 14:37:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:19.545 14:37:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.545 14:37:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.546 14:37:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.546 14:37:53 -- paths/export.sh@5 -- # export PATH 00:03:19.546 14:37:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:19.546 14:37:53 -- nvmf/common.sh@47 -- # : 0 00:03:19.546 14:37:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:19.546 14:37:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:19.546 14:37:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:19.546 14:37:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:19.546 14:37:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:19.546 14:37:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:19.546 14:37:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:19.546 14:37:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:19.546 14:37:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:19.546 14:37:53 -- spdk/autotest.sh@32 -- # uname -s 00:03:19.546 14:37:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:19.546 14:37:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:19.546 14:37:53 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:19.546 14:37:53 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:19.546 14:37:53 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:19.546 14:37:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:19.546 14:37:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:19.546 14:37:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:19.546 14:37:53 -- spdk/autotest.sh@48 -- # udevadm_pid=2625309 00:03:19.546 14:37:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:19.546 14:37:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:19.546 14:37:53 -- pm/common@17 -- # local monitor 00:03:19.546 14:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.546 14:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.546 14:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.546 14:37:53 -- pm/common@21 -- # date +%s 00:03:19.546 14:37:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:19.546 14:37:53 -- pm/common@21 -- # date +%s 00:03:19.546 14:37:53 -- 
pm/common@25 -- # sleep 1 00:03:19.546 14:37:53 -- pm/common@21 -- # date +%s 00:03:19.546 14:37:53 -- pm/common@21 -- # date +%s 00:03:19.546 14:37:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047073 00:03:19.546 14:37:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047073 00:03:19.546 14:37:53 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047073 00:03:19.546 14:37:53 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721047073 00:03:19.546 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047073_collect-vmstat.pm.log 00:03:19.546 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047073_collect-cpu-load.pm.log 00:03:19.546 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047073_collect-cpu-temp.pm.log 00:03:19.546 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721047073_collect-bmc-pm.bmc.pm.log 00:03:20.481 14:37:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.481 14:37:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:20.481 14:37:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:20.481 14:37:54 -- common/autotest_common.sh@10 -- # set +x 00:03:20.481 14:37:54 -- spdk/autotest.sh@59 -- # create_test_list 00:03:20.481 14:37:54 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:20.481 14:37:54 -- common/autotest_common.sh@10 -- # set +x 00:03:20.740 14:37:54 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:20.740 14:37:54 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:20.740 14:37:54 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:20.740 14:37:54 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:20.740 14:37:54 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:20.740 14:37:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:20.740 14:37:54 -- common/autotest_common.sh@1455 -- # uname 00:03:20.740 14:37:54 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:20.740 14:37:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:20.740 14:37:54 -- common/autotest_common.sh@1475 -- # uname 00:03:20.740 14:37:54 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:20.740 14:37:54 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:20.740 14:37:54 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:20.740 14:37:54 -- spdk/autotest.sh@72 -- # hash lcov 00:03:20.740 14:37:54 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:20.740 14:37:54 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:20.740 --rc lcov_branch_coverage=1 00:03:20.740 --rc 
lcov_function_coverage=1 00:03:20.740 --rc genhtml_branch_coverage=1 00:03:20.740 --rc genhtml_function_coverage=1 00:03:20.740 --rc genhtml_legend=1 00:03:20.740 --rc geninfo_all_blocks=1 00:03:20.740 ' 00:03:20.740 14:37:54 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:20.740 --rc lcov_branch_coverage=1 00:03:20.740 --rc lcov_function_coverage=1 00:03:20.740 --rc genhtml_branch_coverage=1 00:03:20.740 --rc genhtml_function_coverage=1 00:03:20.740 --rc genhtml_legend=1 00:03:20.740 --rc geninfo_all_blocks=1 00:03:20.740 ' 00:03:20.740 14:37:54 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:20.740 --rc lcov_branch_coverage=1 00:03:20.740 --rc lcov_function_coverage=1 00:03:20.740 --rc genhtml_branch_coverage=1 00:03:20.740 --rc genhtml_function_coverage=1 00:03:20.740 --rc genhtml_legend=1 00:03:20.740 --rc geninfo_all_blocks=1 00:03:20.740 --no-external' 00:03:20.740 14:37:54 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:20.740 --rc lcov_branch_coverage=1 00:03:20.740 --rc lcov_function_coverage=1 00:03:20.740 --rc genhtml_branch_coverage=1 00:03:20.740 --rc genhtml_function_coverage=1 00:03:20.740 --rc genhtml_legend=1 00:03:20.740 --rc geninfo_all_blocks=1 00:03:20.740 --no-external' 00:03:20.740 14:37:54 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:20.740 lcov: LCOV version 1.14 00:03:20.740 14:37:54 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:24.931 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 
00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:24.931 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:24.931 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 
00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:24.932 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:24.932 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:39.792 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.792 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:45.147 14:38:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:45.147 14:38:18 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.147 14:38:18 -- common/autotest_common.sh@10 -- # set +x 00:03:45.147 14:38:18 -- spdk/autotest.sh@91 -- # rm -f 00:03:45.147 14:38:18 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.522 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:03:46.522 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:46.522 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:46.522 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:46.780 
0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:46.780 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:47.037 14:38:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:47.037 14:38:20 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:47.037 14:38:20 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:47.037 14:38:20 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:47.037 14:38:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:47.037 14:38:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:47.037 14:38:20 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:47.037 14:38:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.037 14:38:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:47.037 14:38:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:47.037 14:38:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:47.037 14:38:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:47.037 14:38:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:47.037 14:38:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:47.037 14:38:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:47.037 No valid GPT data, bailing 00:03:47.037 14:38:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:47.037 14:38:20 -- scripts/common.sh@391 -- # pt= 00:03:47.037 14:38:20 -- scripts/common.sh@392 -- # return 1 00:03:47.037 14:38:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:47.037 1+0 records in 00:03:47.037 1+0 records out 00:03:47.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040267 s, 260 MB/s 00:03:47.037 14:38:20 -- spdk/autotest.sh@118 -- # sync 00:03:47.037 14:38:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:47.037 14:38:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:47.037 14:38:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:52.299 14:38:25 -- spdk/autotest.sh@124 -- # uname -s 00:03:52.299 14:38:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:52.299 14:38:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:52.299 14:38:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.299 14:38:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.299 14:38:25 -- common/autotest_common.sh@10 -- # set +x 00:03:52.299 ************************************ 00:03:52.299 START TEST setup.sh 00:03:52.299 ************************************ 00:03:52.299 14:38:25 setup.sh -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:52.299 * Looking for test storage... 00:03:52.299 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:52.299 14:38:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:52.299 14:38:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:52.299 14:38:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:52.299 14:38:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.299 14:38:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.299 14:38:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.299 ************************************ 00:03:52.299 START TEST acl 00:03:52.299 ************************************ 00:03:52.299 14:38:25 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:52.299 * Looking for test storage... 00:03:52.299 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:52.299 14:38:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.299 14:38:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.299 14:38:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:52.299 14:38:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:52.299 14:38:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:52.299 14:38:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:52.299 14:38:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:52.299 14:38:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.299 14:38:26 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.579 14:38:29 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:55.579 14:38:29 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:55.579 14:38:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:55.579 14:38:29 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:55.579 14:38:29 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.579 14:38:29 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:58.111 Hugepages 00:03:58.111 node hugesize free / total 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- 
# continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 00:03:58.111 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.111 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:58.112 
14:38:31 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:58.112 14:38:31 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.112 14:38:31 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.112 14:38:31 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.112 14:38:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:58.112 ************************************ 00:03:58.112 START TEST denied 00:03:58.112 ************************************ 00:03:58.112 14:38:31 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:58.112 14:38:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 
0000:5f:00.0' 00:03:58.112 14:38:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:58.112 14:38:31 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:03:58.112 14:38:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.112 14:38:31 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:01.393 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.393 14:38:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.582 00:04:05.582 real 0m6.898s 00:04:05.582 user 0m2.289s 00:04:05.582 sys 0m3.943s 00:04:05.582 14:38:38 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.582 14:38:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:05.582 ************************************ 00:04:05.582 END TEST denied 00:04:05.582 ************************************ 00:04:05.582 14:38:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:05.582 14:38:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:05.582 14:38:38 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.582 14:38:38 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.582 14:38:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:05.582 ************************************ 00:04:05.582 START TEST allowed 00:04:05.582 ************************************ 00:04:05.582 14:38:38 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:05.582 14:38:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:04:05.582 14:38:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:05.582 14:38:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:04:05.582 14:38:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.582 14:38:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:09.784 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:09.784 14:38:42 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:09.784 14:38:42 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:09.784 14:38:42 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:09.784 14:38:42 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.784 14:38:42 setup.sh.acl.allowed -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.312 00:04:12.312 real 0m6.895s 00:04:12.312 user 0m1.841s 00:04:12.312 sys 0m3.549s 00:04:12.312 14:38:45 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.312 14:38:45 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:12.312 ************************************ 00:04:12.312 END TEST allowed 00:04:12.312 ************************************ 00:04:12.312 14:38:45 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:12.312 00:04:12.312 real 0m19.873s 00:04:12.312 user 0m6.439s 00:04:12.312 sys 0m11.483s 00:04:12.312 14:38:45 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.312 14:38:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:12.312 ************************************ 00:04:12.312 END TEST acl 00:04:12.312 ************************************ 00:04:12.312 14:38:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:12.312 14:38:45 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:12.312 14:38:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.312 14:38:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.312 14:38:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:12.312 ************************************ 00:04:12.312 START TEST hugepages 00:04:12.312 ************************************ 00:04:12.312 14:38:45 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:12.312 * Looking for test storage... 00:04:12.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173221540 kB' 'MemAvailable: 176144828 kB' 'Buffers: 4132 kB' 
'Cached: 9942860 kB' 'SwapCached: 0 kB' 'Active: 7044412 kB' 'Inactive: 3521696 kB' 'Active(anon): 6614564 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 622504 kB' 'Mapped: 214904 kB' 'Shmem: 5995448 kB' 'KReclaimable: 231916 kB' 'Slab: 798240 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 566324 kB' 'KernelStack: 20880 kB' 'PageTables: 9564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 8163496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316220 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.312 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var 
val _ 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.313 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.314 
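The scan that ends just above with "echo 2048" / "return 0" is setup/common.sh walking /proc/meminfo key by key: IFS=': ' splits each line, read -r var val _ drops the unit into the throwaway field, and every key that is not the requested one (Hugepagesize here) simply hits continue. hugepages.sh then records default_hugepages=2048 and counts the two NUMA nodes under /sys/devices/system/node. Below is a minimal standalone sketch of that lookup pattern; get_meminfo_value is an illustrative name, not the exact SPDK helper, which additionally handles per-node meminfo files.

  # Sketch of the per-key /proc/meminfo scan visible in the trace above.
  get_meminfo_value() {
      local want="$1" var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue   # skip keys until the requested one
          echo "$val"                         # the "kB" unit lands in the discarded field
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value Hugepagesize   # prints 2048 on the 2 MiB hugepage systems used in this run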
14:38:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:12.314 14:38:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:12.314 14:38:46 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.314 14:38:46 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.314 14:38:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.314 ************************************ 00:04:12.314 START TEST default_setup 00:04:12.314 ************************************ 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:12.314 14:38:46 
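clear_hp echoes 0 for each per-node hugepages-* pool and exports CLEAR_HUGE=yes; default_setup then asks get_test_nr_hugepages for 2097152 kB on node 0, which at the 2048 kB page size works out to the nr_hugepages=1024 seen in the trace. The sketch below walks those two steps under the assumption of 2 MiB pages and the standard sysfs knobs; the sudo/tee plumbing and the direct nr_hugepages write are illustrative (in the test itself the actual allocation is driven by scripts/setup.sh), and the redirect target of clear_hp's echo is not visible in the xtrace.

  # Zero any pre-existing per-node pools, as clear_hp's "echo 0" loop does.
  for hp in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
      echo 0 | sudo tee "$hp" > /dev/null
  done

  # Turn the requested size into a page count: 2097152 kB / 2048 kB = 1024 pages.
  size_kb=2097152
  page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  nr_hugepages=$(( size_kb / page_kb ))

  # default_setup pins the whole pool to node 0 (node_ids=('0') in the trace).
  echo "$nr_hugepages" | sudo tee \
      /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages > /dev/null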
setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.314 14:38:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:14.845 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:14.845 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:16.226 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.226 14:38:49 
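Before verify_nr_hugepages takes its meminfo snapshot, scripts/setup.sh hands the ioatdma channels and the NVMe device at 0000:5f:00.0 over to vfio-pci, as the "-> vfio-pci" lines above show. The binding can be confirmed from plain sysfs; the snippet below is generic Linux, not SPDK-specific, and the BDF is simply the one from this log.

  # Report the driver currently bound to a PCI device (BDF taken from the log above).
  bdf=0000:5f:00.0
  if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
      echo "$bdf -> $(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")"
  else
      echo "$bdf has no driver bound"
  fi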
setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175397296 kB' 'MemAvailable: 178320584 kB' 'Buffers: 4132 kB' 'Cached: 9942964 kB' 'SwapCached: 0 kB' 'Active: 7062852 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633004 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640244 kB' 'Mapped: 214756 kB' 'Shmem: 5995552 kB' 'KReclaimable: 231916 kB' 'Slab: 796484 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564568 kB' 'KernelStack: 21104 kB' 'PageTables: 9928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8187176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316284 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
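The mapfile -t mem / mem=("${mem[@]#Node +([0-9]) }") pair a few lines above is what lets the same parser serve both /proc/meminfo and the per-node meminfo files: when a node is given, every line carries a "Node N " prefix that the extglob pattern strips before key matching; here no node is set, so the strip is a no-op. A small sketch of that prefix handling on a NUMA system (node0 is picked only as an example):

  # Strip the "Node N " prefix from per-node meminfo so keys line up with /proc/meminfo.
  shopt -s extglob                      # needed for the +([0-9]) pattern
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]:0:3}"         # first few entries now read MemTotal:, MemFree:, ...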
00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.226 14:38:49 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.226 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.226 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175395432 kB' 'MemAvailable: 178318720 kB' 'Buffers: 4132 kB' 'Cached: 9942964 kB' 'SwapCached: 0 kB' 'Active: 7063656 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633808 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641120 kB' 'Mapped: 214832 kB' 'Shmem: 5995552 kB' 'KReclaimable: 231916 kB' 'Slab: 796508 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564592 kB' 'KernelStack: 21216 kB' 'PageTables: 10520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8187440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316396 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.227 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # return 0 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175398452 kB' 'MemAvailable: 178321740 kB' 'Buffers: 4132 kB' 'Cached: 9942964 kB' 'SwapCached: 0 kB' 'Active: 7064072 kB' 'Inactive: 3521696 kB' 'Active(anon): 6634224 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642476 kB' 'Mapped: 214756 kB' 'Shmem: 5995552 kB' 'KReclaimable: 231916 kB' 'Slab: 796468 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564552 kB' 'KernelStack: 21152 kB' 'PageTables: 9560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8187460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316348 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
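Each of the lookups in this verify pass (AnonHugePages, HugePages_Surp, and the HugePages_Rsvd scan that starts here) comes back as 0, and the snapshot itself already reports HugePages_Total: 1024 and HugePages_Free: 1024, i.e. exactly the pool default_setup requested. The sketch below is a simplified stand-in for that kind of check; verify_nr_hugepages in setup/hugepages.sh does the real per-node bookkeeping, and the expected value of 1024 is taken from this run.

  # Simplified check that the hugepage pool matches what the test asked for.
  expected=1024    # page count requested earlier in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

  echo "HugePages total=$total free=$free rsvd=$rsvd surp=$surp"
  (( total == expected )) || { echo "expected $expected hugepages, got $total" >&2; exit 1; }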
00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.228 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:16.229 nr_hugepages=1024 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:16.229 resv_hugepages=0 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:16.229 surplus_hugepages=0 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:16.229 anon_hugepages=0 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:16.229 
14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.229 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175396760 kB' 'MemAvailable: 178320048 kB' 'Buffers: 4132 kB' 'Cached: 9943004 kB' 'SwapCached: 0 kB' 'Active: 7063700 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633852 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642100 kB' 'Mapped: 214756 kB' 'Shmem: 5995592 kB' 'KReclaimable: 231916 kB' 'Slab: 796472 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564556 kB' 'KernelStack: 20944 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8187484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316188 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
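The backslash-escaped strings in this trace (for example \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are not part of the script; they are how bash xtrace renders a quoted, literal right-hand side of == inside [[ ]], escaping each character so the match stays literal rather than a glob pattern. A tiny standalone reproduction (hypothetical snippet, not from the SPDK tree):

    set -x
    get=HugePages_Total
    [[ MemFree == "$get" ]]   # traced roughly as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]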
00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.230 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 
14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
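At this point the test holds nr_hugepages=1024, surp=0 and resv=0 from the lookups above, checks that the global HugePages_Total matches their sum, and then walks the NUMA nodes before re-reading node0's counters below. A condensed sketch of that verification, reusing the get_meminfo sketch from earlier; the trace only shows the resulting per-node assignments (node0=1024, node1=0), so how each count is read is an assumption:

    # verify_nr_hugepages, condensed
    nr_hugepages=1024 surp=0 resv=0                 # values echoed by the lookups above
    total=$(get_meminfo HugePages_Total)            # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}                       # 2 on this machine
    (( no_nodes > 0 )) || echo "no NUMA nodes found"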
00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90824248 kB' 'MemUsed: 6791380 kB' 'SwapCached: 0 kB' 'Active: 3064372 kB' 'Inactive: 196544 kB' 'Active(anon): 2817888 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 196544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2820612 kB' 'Mapped: 166644 kB' 'AnonPages: 443952 kB' 'Shmem: 2377584 kB' 'KernelStack: 14056 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105140 kB' 'Slab: 376524 kB' 'SReclaimable: 105140 kB' 'SUnreclaim: 271384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.231 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.232 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:16.233 node0=1024 expecting 1024 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:16.233 00:04:16.233 real 0m4.031s 00:04:16.233 user 0m1.021s 00:04:16.233 sys 0m1.600s 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.233 14:38:50 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:16.233 ************************************ 00:04:16.233 END TEST default_setup 00:04:16.233 ************************************ 00:04:16.491 14:38:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:16.491 14:38:50 setup.sh.hugepages -- 
setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:16.491 14:38:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.491 14:38:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.491 14:38:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:16.491 ************************************ 00:04:16.491 START TEST per_node_1G_alloc 00:04:16.491 ************************************ 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.491 14:38:50 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:19.039 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:19.039 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:19.039 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
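The get_meminfo trace above walks a meminfo file with IFS=': ' and read -r var val _, stripping the Node <N> prefix when a per-node file is selected, until the requested field is found. Below is a minimal stand-alone sketch of that lookup pattern; the helper name lookup_meminfo is a hypothetical assumption, not the actual setup/common.sh implementation.

  #!/usr/bin/env bash
  # Illustrative sketch of the meminfo lookup pattern traced above.
  # lookup_meminfo is a hypothetical name; the real logic lives in setup/common.sh.
  shopt -s extglob
  lookup_meminfo() {
          local get=$1 node=${2:-}
          local mem_f=/proc/meminfo
          local -a mem
          local var val _
          # With a node argument, read the per-node view instead, as the trace checks for.
          if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                  mem_f=/sys/devices/system/node/node$node/meminfo
          fi
          mapfile -t mem < "$mem_f"
          # Per-node lines carry a "Node <N> " prefix; strip it so field names compare cleanly.
          mem=("${mem[@]#Node +([0-9]) }")
          while IFS=': ' read -r var val _; do
                  if [[ $var == "$get" ]]; then
                          echo "$val"
                          return 0
                  fi
          done < <(printf '%s\n' "${mem[@]}")
          return 1
  }

  # Example: surplus hugepages system-wide, then free hugepages on node 0.
  lookup_meminfo HugePages_Surp
  lookup_meminfo HugePages_Free 0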
00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175411308 kB' 'MemAvailable: 178334596 kB' 'Buffers: 4132 kB' 'Cached: 9943112 kB' 'SwapCached: 0 kB' 'Active: 7070712 kB' 'Inactive: 3521696 kB' 'Active(anon): 6640864 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648556 kB' 'Mapped: 214808 kB' 'Shmem: 5995700 kB' 'KReclaimable: 231916 kB' 'Slab: 798152 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 566236 kB' 'KernelStack: 22176 kB' 'PageTables: 14328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8188088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316524 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.039 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.039 14:38:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.040 14:38:52
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:19.040 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175415904 kB' 'MemAvailable: 178339192 kB' 'Buffers: 4132 kB' 'Cached: 9943112 kB' 'SwapCached: 0 kB' 'Active: 7066936 kB' 'Inactive: 3521696 kB' 'Active(anon): 6637088 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644784 kB' 'Mapped: 214868 kB' 'Shmem: 5995700 kB' 'KReclaimable: 231916 kB' 'Slab: 797828 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 565912 kB' 'KernelStack: 21328 kB' 'PageTables: 11328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8185496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316316 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.041 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 14:38:52
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.042 14:38:52
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 14:38:52
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.042 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175417152 kB' 'MemAvailable: 178340440 kB' 'Buffers: 4132 kB' 'Cached: 9943132 kB' 'SwapCached: 0 kB' 'Active: 7065772 kB' 'Inactive: 3521696 kB' 'Active(anon): 6635924 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643560 kB' 'Mapped: 215280 kB' 'Shmem: 5995720 kB' 'KReclaimable: 231916 kB' 'Slab: 797708 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 565792 kB' 'KernelStack: 20992 kB' 'PageTables: 10180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8187008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316252 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
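The scan traced above is how the test's get_meminfo helper appears to pull a single key such as HugePages_Rsvd out of a meminfo dump: each line is split with IFS=': ' into a key and a value, non-matching keys fall through to continue, and the matching key's value is echoed back. A condensed, stand-alone version of that lookup (illustrative function name, not the helper from setup/common.sh itself):

lookup_meminfo_key() {
    # Print the value of one meminfo key, e.g. HugePages_Rsvd, from a
    # meminfo-style file (defaults to /proc/meminfo).
    local key=$1 file=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

# e.g. lookup_meminfo_key HugePages_Rsvd   -> prints 0 on this run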
00:04:19.043 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:19.044 nr_hugepages=1024 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:19.044 resv_hugepages=0 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:19.044 surplus_hugepages=0 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:19.044 anon_hugepages=0 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.044 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175409340 kB' 'MemAvailable: 178332628 kB' 'Buffers: 4132 kB' 'Cached: 9943156 kB' 'SwapCached: 0 kB' 'Active: 7070416 kB' 'Inactive: 3521696 kB' 'Active(anon): 6640568 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648176 kB' 'Mapped: 215280 kB' 'Shmem: 5995744 kB' 'KReclaimable: 231916 kB' 'Slab: 797684 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 565768 kB' 'KernelStack: 20928 kB' 'PageTables: 9968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8191664 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316240 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.045 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.307 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
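Earlier in this trace the helper tested /sys/devices/system/node/node/meminfo (node was empty, so that path does not exist) and stayed on /proc/meminfo, while the per-node lookups further down switch to /sys/devices/system/node/node0/meminfo and node1/meminfo and strip the leading 'Node N ' prefix with an extglob expansion before parsing. A rough sketch of that source selection, under the assumption that it mirrors what setup/common.sh is doing; the function name is illustrative:

shopt -s extglob   # needed for the +([0-9]) pattern used below

read_meminfo_lines() {
    # Emit meminfo lines for the whole system (empty $1) or for one NUMA
    # node ($1 = 0, 1, ...), with any "Node N " prefix removed.
    local node=$1
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Total: 512" -> "HugePages_Total: 512"
    printf '%s\n' "${mem[@]}"
}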
00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91890616 kB' 'MemUsed: 5725012 kB' 'SwapCached: 0 kB' 'Active: 3065244 kB' 'Inactive: 196544 kB' 'Active(anon): 2818760 kB' 
'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 196544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2820696 kB' 'Mapped: 167012 kB' 'AnonPages: 444368 kB' 'Shmem: 2377668 kB' 'KernelStack: 13704 kB' 'PageTables: 6348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105140 kB' 'Slab: 377484 kB' 'SReclaimable: 105140 kB' 'SUnreclaim: 272344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 
14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.308 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 
14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:52 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:52 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@28 -- # mapfile -t mem 00:04:19.309 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 83527056 kB' 'MemUsed: 10238480 kB' 'SwapCached: 0 kB' 'Active: 3999600 kB' 'Inactive: 3325152 kB' 'Active(anon): 3816236 kB' 'Inactive(anon): 0 kB' 'Active(file): 183364 kB' 'Inactive(file): 3325152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7126616 kB' 'Mapped: 48104 kB' 'AnonPages: 198356 kB' 'Shmem: 3618100 kB' 'KernelStack: 7208 kB' 'PageTables: 3136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126776 kB' 'Slab: 420136 kB' 'SReclaimable: 126776 kB' 'SUnreclaim: 293360 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.310 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:19.311 node0=512 expecting 512 00:04:19.311 14:38:53 
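The trace up to this point is one pass of setup/common.sh's get_meminfo: it picks /sys/devices/system/node/node1/meminfo when a node is given, strips the "Node <n> " prefix, then scans the file key by key until it reaches HugePages_Surp and echoes the value (0 here). A minimal sketch of that lookup pattern, with illustrative function and variable names rather than the script's exact code:

    meminfo_field() {
        # Sketch of the per-node meminfo lookup seen in the trace above.
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        # Per-node meminfo files prefix every line with "Node <n> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ +//' "$file")
        return 1
    }
    # e.g. meminfo_field HugePages_Surp 1   # surplus 2 MiB pages on NUMA node 1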
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:19.311 node1=512 expecting 512 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:19.311 00:04:19.311 real 0m2.823s 00:04:19.311 user 0m1.126s 00:04:19.311 sys 0m1.733s 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.311 14:38:53 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:19.311 ************************************ 00:04:19.311 END TEST per_node_1G_alloc 00:04:19.311 ************************************ 00:04:19.311 14:38:53 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:19.311 14:38:53 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:19.311 14:38:53 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.311 14:38:53 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.311 14:38:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.311 ************************************ 00:04:19.311 START TEST even_2G_alloc 00:04:19.311 ************************************ 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 
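The even_2G_alloc test that starts above calls get_test_nr_hugepages with 2097152 and ends up with nr_hugepages=1024. A back-of-the-envelope check of those numbers (not the script's own code), assuming the request and the hugepage size are both in kB as the "Hugepagesize: 2048 kB" entries in the meminfo dumps suggest:

    # 2097152 kB (2 GiB) requested, 2048 kB hugepages -> 1024 pages
    size_kb=2097152
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
    if (( size_kb < hugepage_kb )); then
        echo "requested size is smaller than one hugepage" >&2
        exit 1
    fi
    echo $(( size_kb / hugepage_kb ))   # -> 1024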
00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.311 14:38:53 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:21.847 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.847 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.847 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:22.117 14:38:55 
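The loop traced at setup/hugepages.sh@81-84 above seeds 512 pages per node: with HUGE_EVEN_ALLOC=yes and no user-supplied node list, the 1024 test pages are spread across both NUMA nodes, which is what the later "node0=512 expecting 512" / "node1=512 expecting 512" checks verify. An illustrative sketch of that split (not the script's exact code):

    nr_hugepages=1024
    no_nodes=2
    nodes_test=()
    for (( node = no_nodes - 1; node >= 0; node-- )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 per node
    done
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
    done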
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.117 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175424804 kB' 'MemAvailable: 178348092 kB' 'Buffers: 4132 kB' 'Cached: 9943268 kB' 'SwapCached: 0 kB' 'Active: 7063360 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633512 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640376 kB' 'Mapped: 214724 kB' 'Shmem: 5995856 kB' 'KReclaimable: 231916 kB' 'Slab: 796952 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 565036 kB' 'KernelStack: 20928 kB' 'PageTables: 9340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8206516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316444 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.118 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175424256 kB' 'MemAvailable: 178347544 kB' 'Buffers: 4132 kB' 'Cached: 9943268 kB' 'SwapCached: 0 kB' 'Active: 7063152 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633304 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640608 kB' 'Mapped: 214724 kB' 'Shmem: 5995856 kB' 'KReclaimable: 231916 kB' 'Slab: 797000 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 565084 kB' 'KernelStack: 20960 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8206532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316412 kB' 'VmallocChunk: 0 kB' 'Percpu: 
72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.119 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.120 14:38:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... per-field scan repeats for Zswap through HugePages_Rsvd: setup/common.sh@32 test against HugePages_Surp, @32 continue, @31 IFS=': ', @31 read -r var val _ ...]
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.121 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175423688 kB' 'MemAvailable: 178346976 kB' 'Buffers: 4132 kB' 'Cached: 9943288 kB' 'SwapCached: 0 kB' 'Active: 7063232 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633384 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640672 kB' 'Mapped: 214724 kB' 'Shmem: 5995876 kB' 'KReclaimable: 231916 kB' 'Slab: 796976 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 565060 kB' 'KernelStack: 20944 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8206556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316396 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB'
[... per-field scan repeats for MemTotal through HugePages_Free: setup/common.sh@32 test against HugePages_Rsvd, @32 continue, @31 IFS=': ', @31 read -r var val _ ...]
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:22.123 nr_hugepages=1024
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:22.123 resv_hugepages=0
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:22.123 surplus_hugepages=0
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.123 anon_hugepages=0
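Each get_meminfo call traced above is just a keyed lookup over /proc/meminfo (or a node's meminfo file under sysfs): every line is split on ': ' and the value of the requested field is echoed back, which is where surp=0, resv=0 and nr_hugepages=1024 above come from. The following is a minimal standalone sketch of that lookup, not the actual setup/common.sh helper; the function name meminfo_value, its layout and the sample outputs in the comments are illustrative assumptions.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup exercised in the trace above.
# meminfo_value() is illustrative; it is not the SPDK setup/common.sh helper.
meminfo_value() {
    local get=$1 node=${2-} mem_f line var val _
    mem_f=/proc/meminfo
    # A node argument switches to the per-node view under sysfs, when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}             # per-node lines carry a "Node <id> " prefix
        IFS=': ' read -r var val _ <<<"$line"  # split "Key:   value [kB]" into key and value
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

meminfo_value HugePages_Surp     # system-wide; prints 0 on this run
meminfo_value HugePages_Total 0  # NUMA node 0; prints 512 on this run

The real helper does the same thing but snapshots the whole file with mapfile first and strips the leading "Node <id> " prefix with an extglob substitution, which is what the @28/@29 lines in the trace show.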
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.123 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175424236 kB' 'MemAvailable: 178347524 kB' 'Buffers: 4132 kB' 'Cached: 9943308 kB' 'SwapCached: 0 kB' 'Active: 7063216 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633368 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640672 kB' 'Mapped: 214724 kB' 'Shmem: 5995896 kB' 'KReclaimable: 231916 kB' 'Slab: 796976 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 565060 kB' 'KernelStack: 20944 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8206576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316396 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB'
[... per-field scan repeats for MemTotal through Unaccepted: setup/common.sh@32 test against HugePages_Total, @32 continue, @31 IFS=': ', @31 read -r var val _ ...]
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
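The get_nodes trace above found two NUMA nodes with 512 pages requested on each (nodes_sys[0]=nodes_sys[1]=512), so the even_2G_alloc check now re-reads the hugepage counters per node to confirm that the 1024 pages really did land 512 per node. Below is a minimal sketch of that even-split check against the standard per-node sysfs counters; it is an illustration under those assumptions, not the setup/hugepages.sh logic itself, and the variable names are illustrative.

#!/usr/bin/env bash
# Sketch of the even-split check behind the per-node lookups that follow:
# 1024 hugepages of 2048 kB spread over 2 NUMA nodes should leave 512 on each.
# The sysfs path is the standard kernel location; variable names are illustrative.
total=1024
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$((total / ${#nodes[@]}))   # 1024 / 2 = 512 on this host

for node in "${nodes[@]}"; do
    got=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    printf '%s: expected %d, allocated %d\n' "${node##*/}" "$per_node" "$got"
    (( got == per_node )) || exit 1
done

With both nodes sized at 512, the remaining trace simply repeats the meminfo lookup node by node, starting with node 0 below.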
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91893620 kB' 'MemUsed: 5722008 kB' 'SwapCached: 0 kB' 'Active: 3065480 kB' 'Inactive: 196544 kB' 'Active(anon): 2818996 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 196544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2820828 kB' 'Mapped: 165832 kB' 'AnonPages: 444352 kB' 'Shmem: 2377800 kB' 'KernelStack: 13752 kB' 'PageTables: 6240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105140 kB' 'Slab: 376892 kB' 'SReclaimable: 105140 kB' 'SUnreclaim: 271752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-field scan of the node0 snapshot begins (MemTotal, MemFree, MemUsed, ...): setup/common.sh@32 test against HugePages_Surp, @32 continue, @31 IFS=': ', @31 read -r var val _ ...]
00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.125 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 83529860 kB' 'MemUsed: 10235676 kB' 'SwapCached: 0 kB' 'Active: 3997768 kB' 'Inactive: 3325152 kB' 'Active(anon): 3814404 kB' 'Inactive(anon): 0 kB' 'Active(file): 183364 kB' 'Inactive(file): 3325152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7126636 kB' 'Mapped: 48816 kB' 'AnonPages: 196320 kB' 'Shmem: 3618120 kB' 'KernelStack: 7192 kB' 'PageTables: 3144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126776 kB' 'Slab: 420084 kB' 'SReclaimable: 126776 kB' 'SUnreclaim: 293308 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.126 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.127 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:22.128 node0=512 expecting 512 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # 
sorted_t[nodes_test[node]]=1 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:22.128 node1=512 expecting 512 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:22.128 00:04:22.128 real 0m2.928s 00:04:22.128 user 0m1.213s 00:04:22.128 sys 0m1.786s 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.128 14:38:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.128 ************************************ 00:04:22.128 END TEST even_2G_alloc 00:04:22.128 ************************************ 00:04:22.387 14:38:56 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:22.387 14:38:56 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:22.387 14:38:56 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.387 14:38:56 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.387 14:38:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.387 ************************************ 00:04:22.387 START TEST odd_alloc 00:04:22.387 ************************************ 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.387 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 
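[Editor's sketch] The odd_alloc setup traced above asks for 2098176 kB of 2048 kB hugepages (rounded up to nr_hugepages=1025) and spreads them over the host's two NUMA nodes as 512 and 513. The following is a minimal bash sketch reconstructed from the xtrace values only, not taken from setup/hugepages.sh itself; variable names mirror the trace, the loop body is an assumption that reproduces the same arithmetic.

  # Sketch only: reproduces the per-node split visible in the trace
  # (1025 pages over 2 nodes -> node1=512, node0=513).
  _nr_hugepages=1025   # 2098176 kB / 2048 kB per page, rounded up (per the trace)
  _no_nodes=2
  declare -a nodes_test

  left=$_nr_hugepages
  nodes=$_no_nodes
  while (( nodes > 0 )); do
      # Each remaining node takes an equal integer share; the odd leftover
      # page therefore lands on the last node filled (node 0 here).
      nodes_test[nodes - 1]=$(( left / nodes ))
      left=$(( left - nodes_test[nodes - 1] ))
      (( nodes-- ))
  done

  printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"
  # node0=513
  # node1=512

This matches the assignments seen in the trace (nodes_test[1]=512, then nodes_test[0]=513) and the "expecting 513/512" checks that follow later in the odd_alloc test.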
00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.388 14:38:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:24.953 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.953 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:24.953 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175399088 kB' 'MemAvailable: 178322376 kB' 'Buffers: 4132 kB' 'Cached: 9943420 kB' 'SwapCached: 0 kB' 'Active: 7064848 kB' 'Inactive: 3521696 kB' 'Active(anon): 6635000 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641740 kB' 'Mapped: 214736 kB' 'Shmem: 5996008 kB' 'KReclaimable: 231916 kB' 'Slab: 796384 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564468 kB' 'KernelStack: 21056 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8207188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316396 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 
14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.953 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.954 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175399292 kB' 'MemAvailable: 178322580 kB' 'Buffers: 4132 kB' 'Cached: 9943424 kB' 'SwapCached: 0 kB' 'Active: 7064072 kB' 'Inactive: 3521696 kB' 'Active(anon): 6634224 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641444 kB' 'Mapped: 214656 kB' 'Shmem: 5996012 kB' 'KReclaimable: 231916 kB' 'Slab: 796360 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564444 kB' 'KernelStack: 21040 kB' 'PageTables: 9684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8207204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316380 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.955 14:38:58 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the remaining /proc/meminfo fields, MemAvailable through CmaFree, were each read and skipped because none of them matched HugePages_Surp] 00:04:24.956 
14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175399292 kB' 'MemAvailable: 178322580 kB' 'Buffers: 4132 kB' 'Cached: 
9943424 kB' 'SwapCached: 0 kB' 'Active: 7064116 kB' 'Inactive: 3521696 kB' 'Active(anon): 6634268 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641480 kB' 'Mapped: 214656 kB' 'Shmem: 5996012 kB' 'KReclaimable: 231916 kB' 'Slab: 796360 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564444 kB' 'KernelStack: 21056 kB' 'PageTables: 9736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8207224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316380 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.956 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.957 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.957 
14:38:58 setup.sh.hugepages.odd_alloc [xtrace condensed: the fields Active through HugePages_Free were each read and skipped; the scan stopped when the HugePages_Rsvd line matched] 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # return 0 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.958 nr_hugepages=1025 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.958 resv_hugepages=0 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.958 surplus_hugepages=0 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.958 anon_hugepages=0 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175399136 kB' 'MemAvailable: 178322424 kB' 'Buffers: 4132 kB' 'Cached: 9943480 kB' 'SwapCached: 0 kB' 'Active: 7063748 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633900 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641064 kB' 'Mapped: 214656 kB' 'Shmem: 5996068 kB' 'KReclaimable: 231916 kB' 'Slab: 796360 kB' 'SReclaimable: 231916 kB' 'SUnreclaim: 564444 kB' 'KernelStack: 21024 kB' 'PageTables: 9632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8207244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316380 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:24.958 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.220 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the fields MemTotal through FilePmdMapped were each read and skipped because none of them matched HugePages_Total] 00:04:25.222 14:38:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91879084 kB' 'MemUsed: 5736544 kB' 'SwapCached: 0 kB' 'Active: 3064880 kB' 'Inactive: 196544 kB' 'Active(anon): 2818396 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 196544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2820872 kB' 'Mapped: 165844 kB' 'AnonPages: 443696 kB' 'Shmem: 2377844 kB' 'KernelStack: 13848 kB' 'PageTables: 6588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105140 kB' 'Slab: 376252 kB' 'SReclaimable: 105140 kB' 'SUnreclaim: 271112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
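The stretch of trace above is setup/common.sh's get_meminfo walking /sys/devices/system/node/node0/meminfo key by key until it hits HugePages_Surp (it echoes 0 a little further down). A minimal standalone sketch of the same lookup, for reference while reading the loop; the helper name node_meminfo is invented here, and the real script uses mapfile plus an extglob prefix strip rather than sed:

node_meminfo() {                                    # hypothetical helper, illustration only
    local key=$1 node=$2 var val _
    local file=/sys/devices/system/node/node${node}/meminfo
    # Per-node meminfo lines look like "Node 0 HugePages_Surp:     0";
    # drop the "Node <id> " prefix, then split on ':' / whitespace like the trace does.
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(sed "s/^Node ${node} //" "$file")
    return 1
}
node_meminfo HugePages_Surp 0    # on this box: 0 (values differ per host)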
00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.222 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 
kB' 'MemFree: 83520908 kB' 'MemUsed: 10244628 kB' 'SwapCached: 0 kB' 'Active: 3999164 kB' 'Inactive: 3325152 kB' 'Active(anon): 3815800 kB' 'Inactive(anon): 0 kB' 'Active(file): 183364 kB' 'Inactive(file): 3325152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7126760 kB' 'Mapped: 48812 kB' 'AnonPages: 197628 kB' 'Shmem: 3618244 kB' 'KernelStack: 7176 kB' 'PageTables: 3044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126776 kB' 'Slab: 420108 kB' 'SReclaimable: 126776 kB' 'SUnreclaim: 293332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
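For node 1 the same scan is about to land on HugePages_Total: 513, which together with node 0's 512 accounts for the global total of 1025 that hugepages.sh@110 checked against nr_hugepages + surp + resv earlier. A quick way to reproduce that cross-check by hand, assuming the default 2048 kB hugepage size this run uses (shown later in the meminfo dump):

total=0
for n in /sys/devices/system/node/node[0-9]*; do
    # standard kernel sysfs counter for 2 MB pages on each NUMA node
    (( total += $(cat "$n"/hugepages/hugepages-2048kB/nr_hugepages) ))
done
echo "per-node sum: $total"                 # 512 + 513 = 1025 on this run
grep HugePages_Total /proc/meminfo          # HugePages_Total:    1025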
00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.223 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:25.224 node0=512 expecting 513 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.224 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:25.224 node1=513 expecting 512 00:04:25.225 14:38:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:25.225 00:04:25.225 real 0m2.885s 00:04:25.225 user 0m1.137s 00:04:25.225 sys 0m1.803s 00:04:25.225 14:38:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.225 14:38:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.225 ************************************ 00:04:25.225 END TEST odd_alloc 00:04:25.225 ************************************ 00:04:25.225 14:38:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.225 14:38:58 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:25.225 14:38:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.225 14:38:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.225 14:38:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.225 ************************************ 00:04:25.225 START TEST custom_alloc 00:04:25.225 ************************************ 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:25.225 14:38:59 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@78 -- # return 0 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.225 14:38:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:27.754 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:27.754 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:27.754 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:27.754 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:27.755 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174262872 kB' 'MemAvailable: 177186176 kB' 'Buffers: 4132 kB' 'Cached: 9943564 kB' 'SwapCached: 0 kB' 'Active: 7068256 kB' 'Inactive: 3521696 kB' 'Active(anon): 6638408 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645224 kB' 'Mapped: 215692 kB' 'Shmem: 5996152 kB' 'KReclaimable: 231948 kB' 'Slab: 796852 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564904 kB' 'KernelStack: 21056 kB' 'PageTables: 9784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8217992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316384 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.015 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
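The custom_alloc pass asked setup.sh for an asymmetric split via HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024', and the meminfo dump just printed confirms it took effect: HugePages_Total: 1536 = 512 + 1024, and Hugetlb: 3145728 kB = 1536 x 2048 kB. The sketch below shows the underlying per-node sysfs knobs such a split ultimately relies on; it is not the scripts/setup.sh code path itself, and it needs root:

declare -A want=( [0]=512 [1]=1024 )        # node -> requested 2 MB pages (the HUGENODE split)
for node in "${!want[@]}"; do
    echo "${want[$node]}" | sudo tee \
        "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
grep -E 'HugePages_Total|Hugetlb' /proc/meminfo   # expect 1536 pages / 3145728 kB total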
00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.016 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174267188 kB' 'MemAvailable: 177190492 kB' 'Buffers: 4132 kB' 'Cached: 9943576 kB' 'SwapCached: 0 kB' 'Active: 7066240 kB' 'Inactive: 3521696 kB' 'Active(anon): 6636392 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643688 kB' 'Mapped: 215432 kB' 'Shmem: 5996164 kB' 'KReclaimable: 231948 kB' 'Slab: 796812 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564864 kB' 'KernelStack: 21072 kB' 'PageTables: 9824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8216388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316364 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.017 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.018 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:28.019 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.020 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174269800 kB' 'MemAvailable: 177193104 kB' 'Buffers: 4132 kB' 'Cached: 9943596 kB' 'SwapCached: 0 kB' 'Active: 7063316 kB' 'Inactive: 3521696 kB' 'Active(anon): 6633468 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640652 kB' 'Mapped: 215184 kB' 'Shmem: 5996184 kB' 'KReclaimable: 231948 kB' 'Slab: 796812 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564864 kB' 'KernelStack: 21056 kB' 'PageTables: 9752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8213504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316380 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.020 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.021 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:28.022 nr_hugepages=1536 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.022 resv_hugepages=0 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.022 surplus_hugepages=0 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.022 anon_hugepages=0 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
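(Editor's note: just above, the setup/hugepages.sh trace has settled anon=0 from AnonHugePages, surp=0 from HugePages_Surp and resv=0 from HugePages_Rsvd, echoed nr_hugepages=1536 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and asserted (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )) before re-reading HugePages_Total, which continues below. A rough sketch of that accounting step, reusing the get_meminfo sketch given earlier — a reconstruction for readability, not the verbatim setup/hugepages.sh; names follow the trace, ordering is simplified.)

check_custom_alloc_accounting() {
    local requested=1536                          # 1536 x 2048 kB pages = 3145728 kB, the Hugetlb value above
    local anon surp resv nr_hugepages

    anon=$(get_meminfo AnonHugePages)             # 0 in the snapshot above
    surp=$(get_meminfo HugePages_Surp)            # 0
    resv=$(get_meminfo HugePages_Rsvd)            # 0
    nr_hugepages=$(get_meminfo HugePages_Total)   # 1536

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # Every configured page must be accounted for: no surplus or reserved
    # leftovers, and the pool size must match what the test requested.
    (( requested == nr_hugepages + surp + resv ))
    (( requested == nr_hugepages ))
}

(End of editor's note; the original trace continues below.)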
00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174265988 kB' 'MemAvailable: 177189292 kB' 'Buffers: 4132 kB' 'Cached: 9943616 kB' 'SwapCached: 0 kB' 'Active: 7068140 kB' 'Inactive: 3521696 kB' 'Active(anon): 6638292 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645432 kB' 'Mapped: 215184 kB' 'Shmem: 5996204 kB' 'KReclaimable: 231948 kB' 'Slab: 796812 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564864 kB' 'KernelStack: 21056 kB' 'PageTables: 9744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8218424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316384 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.022 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.023 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.024 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91827932 kB' 'MemUsed: 5787696 kB' 'SwapCached: 0 kB' 'Active: 3067824 kB' 'Inactive: 196544 kB' 'Active(anon): 2821340 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 196544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2820872 kB' 'Mapped: 166680 kB' 'AnonPages: 446764 kB' 'Shmem: 2377844 kB' 'KernelStack: 13800 kB' 'PageTables: 6472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
105172 kB' 'Slab: 376536 kB' 'SReclaimable: 105172 kB' 'SUnreclaim: 271364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 
14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.025 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
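The node-0 scan above (and the node-1 scan that follows) is driven by the get_nodes loop visible earlier in the trace: enumerate /sys/devices/system/node/node<N>, then query each node's meminfo for the hugepage counters the test will compare against its expected split. A rough sketch of that enumeration pattern, reusing the hypothetical get_meminfo_sketch helper from above; extglob is enabled because the traced +([0-9]) glob requires it:

# Illustration of the per-node enumeration seen in the trace: walk the NUMA node
# directories and report each node's hugepage counters.
shopt -s extglob
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}                                   # ".../node1" -> "1"
    total=$(get_meminfo_sketch HugePages_Total "$id")
    surp=$(get_meminfo_sketch HugePages_Surp "$id")
    echo "node$id: HugePages_Total=$total HugePages_Surp=${surp:-0}"
done

On the layout shown in this run it would print HugePages_Total=512 for node0 and 1024 for node1, which is exactly the "512,1024" arrangement the custom_alloc test checks a little further on.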
00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 82437552 kB' 'MemUsed: 11327984 kB' 'SwapCached: 0 kB' 'Active: 4000528 kB' 'Inactive: 3325152 kB' 'Active(anon): 3817164 kB' 'Inactive(anon): 0 kB' 'Active(file): 183364 kB' 'Inactive(file): 3325152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7126876 kB' 'Mapped: 48820 kB' 'AnonPages: 198936 kB' 'Shmem: 3618360 kB' 'KernelStack: 7240 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126776 kB' 'Slab: 420276 kB' 'SReclaimable: 126776 kB' 'SUnreclaim: 293500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.026 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.027 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
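As a cross-check of the snapshots printed above, the reported numbers are self-consistent: 1536 huge pages of 2048 kB account for the Hugetlb figure in the global /proc/meminfo snapshot, and the per-node split under test sums to the global total.

echo $(( 1536 * 2048 ))   # 3145728 kB -> Hugetlb in the global snapshot
echo $(( 512 + 1024 ))    # 1536       -> HugePages_Total (node0 + node1)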
00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.286 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.287 14:39:01 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:28.287 node0=512 expecting 512 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:28.287 node1=1024 expecting 1024 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:28.287 00:04:28.287 real 0m2.917s 00:04:28.287 user 0m1.173s 00:04:28.287 sys 0m1.807s 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.287 14:39:01 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:28.287 ************************************ 00:04:28.287 END TEST custom_alloc 00:04:28.287 ************************************ 00:04:28.287 14:39:01 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:28.287 14:39:01 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:28.287 14:39:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.287 14:39:01 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.287 14:39:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:28.287 ************************************ 00:04:28.287 START TEST no_shrink_alloc 00:04:28.287 ************************************ 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@49 -- # local size=2097152 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.287 14:39:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:30.818 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:30.818 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:30.818 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # 
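no_shrink_alloc opens the same way: get_test_nr_hugepages receives a size in kB (2097152, i.e. 2 GiB) plus the node list ('0'), and the trace shows it settling on nr_hugepages=1024 with nodes_test[0]=1024. Assuming the 2048 kB Hugepagesize reported in the /proc/meminfo dumps below, the arithmetic reduces to this sketch (names are illustrative, not the script's):

# Sketch of the sizing step traced above; assumes Hugepagesize is 2048 kB,
# which matches the /proc/meminfo snapshots later in this log.
requested_kb=2097152                             # argument passed to get_test_nr_hugepages
hugepage_kb=2048                                 # default hugepage size on this host
nr_hugepages=$(( requested_kb / hugepage_kb ))   # 1024, as in the trace
node_ids=(0)                                     # only node 0 was requested
declare -a nodes_test
for node in "${node_ids[@]}"; do
  nodes_test[node]=$nr_hugepages                 # nodes_test[0]=1024
done
echo "requesting ${nodes_test[0]} hugepages on node 0"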
verify_nr_hugepages 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.818 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175278620 kB' 'MemAvailable: 178201924 kB' 'Buffers: 4132 kB' 'Cached: 9943716 kB' 'SwapCached: 0 kB' 'Active: 7067708 kB' 'Inactive: 3521696 kB' 'Active(anon): 6637860 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644736 kB' 'Mapped: 214676 kB' 'Shmem: 5996304 kB' 'KReclaimable: 231948 kB' 'Slab: 796764 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564816 kB' 'KernelStack: 20944 kB' 'PageTables: 9504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8182892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316304 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 
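verify_nr_hugepages first checks whether transparent hugepages are disabled: the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' entry is matching the kernel's THP "enabled" string, and only when THP is not pinned to never does AnonHugePages get counted (it comes back as 0 kB in this run). A hedged sketch, assuming the standard sysfs location for that knob:

# Sketch of the THP gate; the sysfs path below is the usual source of the
# "always [madvise] never" string seen in the trace, an assumption on my part.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
  # THP active in some form: anonymous hugepages may contribute, so count them
  anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
  anon_kb=0
fi
echo "AnonHugePages: ${anon_kb:-0} kB"           # 0 kB on this host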
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.819 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.084 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
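The long run of '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue' entries is the get_meminfo helper walking the captured meminfo contents one "key: value" pair at a time (IFS=': '; read -r var val _) and skipping every key that is not the one asked for; on a match it echoes the value and returns. A self-contained equivalent of that lookup, with an illustrative function name rather than the SPDK helper itself:

# Sketch: same scan-and-continue pattern as the trace, reading /proc/meminfo
# directly instead of the mapfile'd copy the script keeps in "mem".
meminfo_value() {                          # illustrative name, not common.sh's get_meminfo
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue       # the repeated [[ ... ]] / continue lines above
    echo "$val"
    return 0
  done < /proc/meminfo
  return 1                                 # key not present
}

anon=$(meminfo_value AnonHugePages)        # "0" on this host, hence anon=0 below

HugePages_Surp and HugePages_Rsvd, fetched next in the trace, go through exactly the same loop.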
00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175278944 kB' 'MemAvailable: 178202248 kB' 'Buffers: 4132 kB' 'Cached: 9943720 kB' 'SwapCached: 0 kB' 'Active: 7067660 kB' 'Inactive: 3521696 kB' 'Active(anon): 6637812 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644688 kB' 'Mapped: 214616 kB' 'Shmem: 5996308 kB' 'KReclaimable: 231948 kB' 'Slab: 796736 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564788 kB' 'KernelStack: 20960 kB' 'PageTables: 9544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8182912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316288 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.085 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.086 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175279356 kB' 'MemAvailable: 178202660 kB' 'Buffers: 4132 kB' 'Cached: 9943736 kB' 'SwapCached: 0 kB' 'Active: 7067672 kB' 'Inactive: 3521696 kB' 'Active(anon): 6637824 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644704 kB' 'Mapped: 214556 kB' 'Shmem: 5996324 kB' 'KReclaimable: 231948 kB' 'Slab: 796792 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564844 kB' 'KernelStack: 20928 kB' 'PageTables: 9460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8184052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316304 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
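One detail worth noting in these get_meminfo calls: mem_f defaults to /proc/meminfo, and the '[[ -e /sys/devices/system/node/node/meminfo ]]' test above is the per-node switch with an empty node argument, so the system-wide file is kept. When a node is passed, the helper reads that node's own meminfo and strips the leading "Node N " prefix, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion is for. A hedged sketch of that per-node path, with illustrative names:

# Sketch of the per-node file selection and the "Node N " prefix strip seen in the trace.
node_meminfo_value() {                     # illustrative name
  local get=$1 node=$2 mem_f=/proc/meminfo mem
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  shopt -s extglob                         # needed for the +([0-9]) pattern below
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")         # per-node files prefix every line with "Node N "
  local var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done < <(printf '%s\n' "${mem[@]}")
}

node_meminfo_value HugePages_Free 0        # e.g. the free hugepage count on node 0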
00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.087 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
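One presentation detail worth noting, since it accounts for most of the visual noise in this section: the requested key is rendered as \H\u\g\e\P\a\g\e\s\_\R\s\v\d in every comparison. That is not log corruption; bash's xtrace appears to replay a quoted right-hand side of == inside [[ ]] with each character backslash-escaped, so the logged form stays a literal string comparison rather than a glob. A tiny standalone demo (the variable names are placeholders for illustration, not SPDK code):

#!/usr/bin/env bash
# Illustration only: reproduce the escaped key seen in the trace above.
set -x
get=HugePages_Rsvd
var=SwapCached
[[ $var == "$get" ]] || echo skipped
# xtrace prints:  [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]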
00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.088 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.089 nr_hugepages=1024 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.089 resv_hugepages=0 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.089 surplus_hugepages=0 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.089 anon_hugepages=0 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175280352 kB' 'MemAvailable: 178203656 kB' 'Buffers: 4132 kB' 'Cached: 9943772 kB' 'SwapCached: 0 kB' 'Active: 7067900 kB' 'Inactive: 3521696 kB' 'Active(anon): 6638052 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644948 kB' 'Mapped: 214568 kB' 'Shmem: 5996360 kB' 'KReclaimable: 231948 kB' 'Slab: 796792 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564844 kB' 'KernelStack: 20944 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8184076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316320 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 
14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.089 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.090 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
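With HugePages_Rsvd already resolved to 0 and this second scan about to return HugePages_Total (the "echo 1024" just below), hugepages.sh can run its accounting: the total the kernel reports must be covered by the requested pages plus surplus plus reserved, and get_nodes then records how those pages are spread across the NUMA nodes before the per-node pass that ends in "node0=1024 expecting 1024". A hedged sketch of that bookkeeping follows, reusing the get_meminfo sketch above; variable names (nr_hugepages, surp, resv, nodes_sys, no_nodes) mirror the trace, while the sysfs path is an assumption based on the standard 2048 kB hugepage layout, not taken from the script.

shopt -s extglob

# nodes_sys[N] = hugepages the kernel reports on NUMA node N (from sysfs);
# in this run node0=1024 and node1=0, hence no_nodes=2.
declare -A nodes_sys
no_nodes=0

get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# Sysfs path assumed from the standard layout for 2048 kB pages.
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}
	((no_nodes > 0))
}

check_totals() {
	local nr_hugepages=$1 # requested count, 1024 in this run
	local total surp resv

	total=$(get_meminfo HugePages_Total) # the "echo 1024" just below
	surp=$(get_meminfo HugePages_Surp)
	resv=$(get_meminfo HugePages_Rsvd)   # the "echo 0" earlier in the trace

	# The "(( 1024 == nr_hugepages + surp + resv ))" entries in the trace:
	# everything the kernel reports must be explained by the request plus
	# surplus plus reserved pages.
	((total == nr_hugepages + surp + resv))
}

The HugePages_Surp lookup against /sys/devices/system/node/node0/meminfo further down is the per-node pass; reserved and surplus pages get folded into the per-node expectation (nodes_test in the trace) before the final "node0=1024 expecting 1024" comparison.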
00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90764196 kB' 'MemUsed: 6851432 kB' 'SwapCached: 0 kB' 'Active: 3066576 kB' 'Inactive: 196544 kB' 'Active(anon): 2820092 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 196544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2820868 kB' 'Mapped: 166692 kB' 'AnonPages: 445372 kB' 'Shmem: 2377840 kB' 'KernelStack: 13688 kB' 'PageTables: 6208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105172 kB' 'Slab: 376624 kB' 'SReclaimable: 105172 kB' 'SUnreclaim: 271452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.091 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.092 node0=1024 expecting 1024 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.092 14:39:04 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:33.728 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:33.728 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:33.728 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:33.728 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175303280 kB' 'MemAvailable: 178226584 kB' 'Buffers: 4132 kB' 'Cached: 9943856 kB' 'SwapCached: 0 kB' 'Active: 7069212 kB' 'Inactive: 3521696 kB' 'Active(anon): 6639364 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646200 kB' 'Mapped: 214624 kB' 'Shmem: 5996444 kB' 'KReclaimable: 231948 kB' 'Slab: 796876 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 564928 kB' 'KernelStack: 21088 kB' 'PageTables: 9888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8188440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316608 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.991 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.991 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 
14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.992 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175317924 kB' 'MemAvailable: 178241228 kB' 'Buffers: 4132 kB' 'Cached: 9943856 kB' 'SwapCached: 0 kB' 'Active: 7064164 kB' 'Inactive: 3521696 kB' 'Active(anon): 6634316 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641116 kB' 'Mapped: 214632 kB' 'Shmem: 5996444 kB' 'KReclaimable: 231948 kB' 'Slab: 796952 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 565004 kB' 'KernelStack: 21200 kB' 'PageTables: 10500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8181412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316460 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 
14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.993 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
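For readers following the xtrace: each of these long scans is the get_meminfo helper from setup/common.sh walking every /proc/meminfo key with IFS=': ' and read -r var val _ until it reaches the requested field (HugePages_Surp at this point) and echoes its value. A minimal standalone sketch of that pattern, reading /proc/meminfo directly instead of the harness's pre-loaded mem array (the function name below is illustrative, not the SPDK API):

get_meminfo_field() {
    # Print the value of a single /proc/meminfo field, e.g. HugePages_Surp.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Keep skipping non-matching keys ("continue" in the trace above)
        # until the requested key is reached, then echo its value.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

Example: get_meminfo_field HugePages_Surp prints 0 for the memory state dumped above.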
00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.994 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175319376 kB' 'MemAvailable: 178242680 kB' 'Buffers: 4132 kB' 'Cached: 9943880 kB' 'SwapCached: 0 kB' 'Active: 7068012 kB' 'Inactive: 3521696 kB' 'Active(anon): 6638164 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644924 kB' 'Mapped: 214152 kB' 'Shmem: 5996468 kB' 'KReclaimable: 231948 kB' 'Slab: 797104 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 565156 kB' 'KernelStack: 20944 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8184708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316352 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 
14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.995 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.996 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.997 nr_hugepages=1024 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.997 resv_hugepages=0 
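The xtrace entries above are the tail end of a meminfo lookup: the script walks /proc/meminfo (or the per-node copy) with IFS=': ', skips every key that is not the one requested via continue, and echoes the matching value, which is how resv=0 and nr_hugepages=1024 are obtained at this point in the run. A condensed, hypothetical sketch of that lookup, written against plain /proc/meminfo and not taken from setup/common.sh itself:

#!/usr/bin/env bash
# Hypothetical helper condensed from the trace above -- not the SPDK
# setup/common.sh implementation itself.
get_meminfo_value() {
    local get=$1 node=${2:-}          # field name, optional NUMA node index
    local mem_f=/proc/meminfo line var val _

    # Per-node queries read the node-local meminfo when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while read -r line; do
        line=${line#Node "$node" }         # node files prefix each key with "Node N "; no-op for /proc/meminfo
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the xtrace
        echo "$val"                        # kB for sizes, a page count for HugePages_*
        return 0
    done < "$mem_f"
    return 1
}

# Reproduces the figures echoed in the log around this point:
echo "nr_hugepages=$(get_meminfo_value HugePages_Total)"    # 1024
echo "resv_hugepages=$(get_meminfo_value HugePages_Rsvd)"   # 0
echo "node0 surplus=$(get_meminfo_value HugePages_Surp 0)"  # 0
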
00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.997 surplus_hugepages=0 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.997 anon_hugepages=0 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175317624 kB' 'MemAvailable: 178240928 kB' 'Buffers: 4132 kB' 'Cached: 9943904 kB' 'SwapCached: 0 kB' 'Active: 7068436 kB' 'Inactive: 3521696 kB' 'Active(anon): 6638588 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645364 kB' 'Mapped: 214448 kB' 'Shmem: 5996492 kB' 'KReclaimable: 231948 kB' 'Slab: 797104 kB' 'SReclaimable: 231948 kB' 'SUnreclaim: 565156 kB' 'KernelStack: 21120 kB' 'PageTables: 9716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8186220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316496 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2479060 kB' 'DirectMap2M: 39143424 kB' 'DirectMap1G: 160432128 kB' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.997 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 
0 )) 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.998 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90792084 kB' 'MemUsed: 6823544 kB' 'SwapCached: 0 kB' 'Active: 3067708 kB' 'Inactive: 196544 kB' 'Active(anon): 2821224 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 196544 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2820932 kB' 'Mapped: 166572 kB' 'AnonPages: 446500 kB' 'Shmem: 2377904 kB' 'KernelStack: 13736 kB' 'PageTables: 6356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105172 kB' 'Slab: 376764 kB' 'SReclaimable: 105172 kB' 'SUnreclaim: 271592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:33.999 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:34.000 node0=1024 expecting 1024 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:34.000 00:04:34.000 real 0m5.813s 00:04:34.000 user 0m2.373s 00:04:34.000 sys 0m3.569s 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.000 14:39:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.000 ************************************ 00:04:34.000 END TEST no_shrink_alloc 00:04:34.000 ************************************ 00:04:34.000 14:39:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:34.000 14:39:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:34.000 00:04:34.000 real 0m21.922s 00:04:34.000 user 0m8.263s 00:04:34.000 sys 0m12.640s 00:04:34.000 14:39:07 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.000 14:39:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.000 ************************************ 00:04:34.000 END TEST hugepages 00:04:34.000 ************************************ 00:04:34.000 14:39:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:34.000 14:39:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:34.000 
14:39:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.000 14:39:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.000 14:39:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:34.258 ************************************ 00:04:34.258 START TEST driver 00:04:34.258 ************************************ 00:04:34.258 14:39:07 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:34.258 * Looking for test storage... 00:04:34.258 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:34.258 14:39:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:34.258 14:39:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.258 14:39:07 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.441 14:39:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:38.441 14:39:11 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.441 14:39:11 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.441 14:39:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.441 ************************************ 00:04:38.441 START TEST guess_driver 00:04:38.441 ************************************ 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:38.441 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.441 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.441 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:38.441 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:38.441 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:38.441 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:38.441 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:38.441 Looking for driver=vfio-pci 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.441 14:39:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:40.964 14:39:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.338 14:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.338 14:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:42.338 14:39:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.338 14:39:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:42.338 14:39:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:42.338 14:39:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.338 14:39:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 
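The guess_driver trace above selects vfio-pci by checking three things: the vfio module exposes /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, the host has IOMMU groups populated (174 entries under /sys/kernel/iommu_groups here), and `modprobe --show-depends vfio_pci` resolves to real .ko.xz modules. The following is a hypothetical standalone sketch of that probe, not SPDK's setup/driver.sh itself; the fallback message simply mirrors the "No valid driver found" marker matched in the trace.

    #!/usr/bin/env bash
    # Hypothetical re-implementation of the driver probe traced above (not SPDK's driver.sh).
    shopt -s nullglob

    pick_driver() {
        local unsafe=N
        local groups=(/sys/kernel/iommu_groups/*)
        # Unsafe no-IOMMU mode only matters if the vfio module exposes the knob.
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # vfio-pci is usable when at least one IOMMU group exists (174 in the log above)
        # or unsafe no-IOMMU mode is on, and modprobe can resolve vfio_pci to real .ko files.
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy] ]]; } &&
            modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }

    pick_driver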
00:04:45.621 00:04:45.621 real 0m7.503s 00:04:45.621 user 0m1.814s 00:04:45.621 sys 0m3.478s 00:04:45.621 14:39:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.621 14:39:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.621 ************************************ 00:04:45.621 END TEST guess_driver 00:04:45.621 ************************************ 00:04:45.880 14:39:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:45.880 00:04:45.880 real 0m11.642s 00:04:45.880 user 0m3.016s 00:04:45.880 sys 0m5.669s 00:04:45.880 14:39:19 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.880 14:39:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:45.880 ************************************ 00:04:45.880 END TEST driver 00:04:45.880 ************************************ 00:04:45.880 14:39:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:45.880 14:39:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:45.880 14:39:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.880 14:39:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.880 14:39:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:45.880 ************************************ 00:04:45.880 START TEST devices 00:04:45.880 ************************************ 00:04:45.880 14:39:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:45.880 * Looking for test storage... 00:04:45.880 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:45.880 14:39:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:45.880 14:39:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:45.880 14:39:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.880 14:39:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.167 14:39:22 setup.sh.devices -- 
setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:49.167 14:39:22 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:49.167 No valid GPT data, bailing 00:04:49.167 14:39:22 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:49.167 14:39:22 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:49.167 14:39:22 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:49.167 14:39:22 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.167 14:39:22 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:49.167 ************************************ 00:04:49.167 START TEST nvme_mount 00:04:49.167 ************************************ 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.167 14:39:22 
setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:49.167 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.168 14:39:22 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:50.104 Creating new GPT entries in memory. 00:04:50.104 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:50.104 other utilities. 00:04:50.104 14:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:50.104 14:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.104 14:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.104 14:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.104 14:39:23 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:51.041 Creating new GPT entries in memory. 00:04:51.041 The operation has completed successfully. 
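At this point the nvme_mount test has zapped the GPT on /dev/nvme0n1 and asked sgdisk for a single 1 GiB partition (sectors 2048-2099199), and the trace that follows waits on a background helper (scripts/sync_dev_uevents.sh) so /dev/nvme0n1p1 actually exists before mkfs.ext4 runs. A rough standalone equivalent is sketched below, with /dev/nvme0n1 as an assumed disposable disk, /tmp/nvme_mount as an assumed mount point, and `udevadm settle` standing in for the SPDK uevent helper.

    # Hypothetical sketch: one 1 GiB GPT partition, formatted and mounted.
    DISK=/dev/nvme0n1                 # assumption: a disposable test disk
    PART=${DISK}p1
    MNT=/tmp/nvme_mount               # assumption: any empty mount point

    sgdisk "$DISK" --zap-all                           # drop old GPT/MBR metadata
    flock "$DISK" sgdisk "$DISK" --new=1:2048:2099199  # 2097152 sectors * 512 B = 1 GiB
    udevadm settle                                     # wait for udev to publish $PART
    mkfs.ext4 -qF "$PART"
    mkdir -p "$MNT" && mount "$PART" "$MNT"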
00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2657303 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.041 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.042 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:51.042 14:39:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.042 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.042 14:39:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # 
[[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.567 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.568 
14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.568 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.568 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.826 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.826 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.826 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.826 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- 
setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.826 14:39:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.357 14:39:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.887 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.888 14:39:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.888 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.888 00:04:58.888 real 0m9.915s 00:04:58.888 user 0m2.734s 00:04:58.888 sys 0m4.926s 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.888 14:39:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:58.888 ************************************ 00:04:58.888 END TEST nvme_mount 00:04:58.888 ************************************ 00:04:58.888 14:39:32 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:58.888 14:39:32 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:58.888 14:39:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.888 14:39:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.888 14:39:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.888 ************************************ 00:04:58.888 START TEST dm_mount 00:04:58.888 ************************************ 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local 
part_no=2 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:58.888 14:39:32 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:00.268 Creating new GPT entries in memory. 00:05:00.268 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:00.268 other utilities. 00:05:00.268 14:39:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:00.268 14:39:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.268 14:39:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.268 14:39:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.268 14:39:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:01.206 Creating new GPT entries in memory. 00:05:01.206 The operation has completed successfully. 00:05:01.206 14:39:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:01.206 14:39:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.206 14:39:34 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.206 14:39:34 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.206 14:39:34 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:02.143 The operation has completed successfully. 
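The dm_mount test repeats the partition loop with part_no=2, so after the second `--new=2:2099200:4196351` call above the disk carries two 1 GiB partitions; the trace that follows glues them together as the device-mapper node nvme_dm_test (resolved to /dev/dm-2) and formats that. Below is a minimal sketch of the same idea, assuming the two partitions already exist; the linear dm table is an illustration, since the exact table SPDK feeds to dmsetup is not shown in this trace.

    # Hypothetical sketch: concatenate two partitions into one dm device.
    P1=/dev/nvme0n1p1
    P2=/dev/nvme0n1p2
    SZ1=$(blockdev --getsz "$P1")     # partition sizes in 512-byte sectors
    SZ2=$(blockdev --getsz "$P2")

    {
        echo "0 $SZ1 linear $P1 0"        # map sectors [0, SZ1) onto P1
        echo "$SZ1 $SZ2 linear $P2 0"     # map the next SZ2 sectors onto P2
    } | dmsetup create nvme_dm_test

    readlink -f /dev/mapper/nvme_dm_test  # resolves to a dm-N node, /dev/dm-2 in the log
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test
    mkdir -p /tmp/dm_mount && mount /dev/mapper/nvme_dm_test /tmp/dm_mount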
00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2661314 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:02.143 14:39:35 
setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.143 14:39:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.704 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 
00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:04.705 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:05:04.963 14:39:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.964 14:39:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.964 14:39:38 
setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.497 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:05:07.498 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:07.757 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:07.757 00:05:07.757 real 0m8.811s 00:05:07.757 user 0m2.147s 00:05:07.757 sys 0m3.692s 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.757 14:39:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:07.757 ************************************ 00:05:07.757 END TEST dm_mount 00:05:07.757 ************************************ 00:05:07.757 14:39:41 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:07.757 14:39:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:07.757 14:39:41 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:07.757 14:39:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.757 14:39:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.757 14:39:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:07.757 14:39:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.757 14:39:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.017 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.017 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.017 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.017 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.017 14:39:41 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 
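The tail of the devices suite above is the shared cleanup: cleanup_nvme unmounts the test mount and wipes /dev/nvme0n1p1 and /dev/nvme0n1 (the "53 ef" bytes are the ext4 superblock magic at offset 0x438; "45 46 49 20 50 41 52 54" is the "EFI PART" GPT signature), and cleanup_dm, whose body follows, removes the nvme_dm_test mapping and wipes the partitions again. A condensed sketch of that teardown is given below; the mount point and device names are assumptions lifted from this log rather than the script's real variables.

    # Hypothetical sketch of the teardown traced above and below.
    DISK=/dev/nvme0n1
    MNT=/tmp/nvme_mount               # assumption: whichever test mount point is still active

    mountpoint -q "$MNT" && umount "$MNT"
    [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
    for dev in "${DISK}p1" "${DISK}p2" "$DISK"; do
        # wipefs --all clears the ext4, GPT and protective-MBR signatures it reports.
        [[ -b $dev ]] && wipefs --all "$dev"
    done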
00:05:08.017 14:39:41 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:08.017 14:39:41 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.017 14:39:41 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.017 14:39:41 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.017 14:39:41 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.017 14:39:41 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.017 00:05:08.017 real 0m22.244s 00:05:08.017 user 0m6.108s 00:05:08.017 sys 0m10.722s 00:05:08.017 14:39:41 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.017 14:39:41 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.017 ************************************ 00:05:08.017 END TEST devices 00:05:08.017 ************************************ 00:05:08.017 14:39:41 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:08.017 00:05:08.017 real 1m16.036s 00:05:08.017 user 0m23.955s 00:05:08.017 sys 0m40.767s 00:05:08.017 14:39:41 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.017 14:39:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.017 ************************************ 00:05:08.017 END TEST setup.sh 00:05:08.017 ************************************ 00:05:08.276 14:39:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:08.276 14:39:41 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:10.811 Hugepages 00:05:10.811 node hugesize free / total 00:05:10.811 node0 1048576kB 0 / 0 00:05:10.811 node0 2048kB 2048 / 2048 00:05:10.811 node1 1048576kB 0 / 0 00:05:10.811 node1 2048kB 0 / 0 00:05:10.811 00:05:10.811 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:10.811 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:10.811 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:10.811 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:10.811 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:10.811 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:10.811 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:10.811 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:10.811 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:10.811 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:10.811 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:10.811 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:10.811 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:10.811 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:10.811 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:10.811 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:10.811 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:10.811 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:10.811 14:39:44 -- spdk/autotest.sh@130 -- # uname -s 00:05:10.811 14:39:44 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:10.811 14:39:44 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:10.811 14:39:44 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:13.347 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:13.347 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:13.347 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:13.606 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:14.985 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:05:15.244 14:39:48 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:16.180 14:39:49 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:16.180 14:39:49 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:16.180 14:39:49 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:16.180 14:39:49 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:16.180 14:39:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:16.180 14:39:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:16.180 14:39:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:16.180 14:39:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:16.180 14:39:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:16.180 14:39:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:16.180 14:39:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:05:16.180 14:39:50 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.781 Waiting for block devices as requested 00:05:18.781 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:05:18.781 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:19.093 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:19.093 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:19.093 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:19.093 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:19.351 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:19.351 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:19.351 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:19.351 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:19.609 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:19.609 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:19.609 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:19.609 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:19.868 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:19.868 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:19.868 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:20.126 14:39:53 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:20.126 14:39:53 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1502 -- # grep 0000:5f:00.0/nvme/nvme 00:05:20.126 14:39:53 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:05:20.126 
14:39:53 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:20.126 14:39:53 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:20.126 14:39:53 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:20.126 14:39:53 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:20.126 14:39:53 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:20.126 14:39:53 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:20.126 14:39:53 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:20.126 14:39:53 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:20.126 14:39:53 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:20.126 14:39:53 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:20.126 14:39:53 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:20.126 14:39:53 -- common/autotest_common.sh@1557 -- # continue 00:05:20.126 14:39:53 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:20.126 14:39:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.126 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.127 14:39:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:20.127 14:39:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.127 14:39:53 -- common/autotest_common.sh@10 -- # set +x 00:05:20.127 14:39:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:22.657 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.657 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:24.034 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.291 14:39:58 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:24.291 14:39:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:24.291 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.291 14:39:58 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:24.291 14:39:58 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:24.291 14:39:58 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.291 14:39:58 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:24.291 14:39:58 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:24.291 14:39:58 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:24.291 14:39:58 -- common/autotest_common.sh@1513 -- # bdfs=() 
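The id-ctrl parsing just traced is how the test decides whether the controller supports namespace management (OACS bit 3) and whether any NVM capacity is still unallocated. A stand-alone equivalent, assuming nvme-cli is installed and /dev/nvme0 is the controller under test:
  # OACS is printed as a hex field by `nvme id-ctrl`; bit 3 (0x8) = namespace management
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  if (( oacs & 0x8 )); then
      echo "namespace management supported"
  fi
  # unvmcap is the unallocated NVM capacity in bytes; 0 means fully allocated
  unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)
  echo "unallocated capacity: ${unvmcap// /}"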
00:05:24.291 14:39:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:24.291 14:39:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.291 14:39:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.291 14:39:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:24.291 14:39:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:24.291 14:39:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:05:24.291 14:39:58 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:24.291 14:39:58 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:05:24.291 14:39:58 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:24.291 14:39:58 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:24.291 14:39:58 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:24.291 14:39:58 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5f:00.0 00:05:24.291 14:39:58 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5f:00.0 ]] 00:05:24.291 14:39:58 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2670283 00:05:24.291 14:39:58 -- common/autotest_common.sh@1598 -- # waitforlisten 2670283 00:05:24.291 14:39:58 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.291 14:39:58 -- common/autotest_common.sh@829 -- # '[' -z 2670283 ']' 00:05:24.291 14:39:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.291 14:39:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.291 14:39:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.291 14:39:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.291 14:39:58 -- common/autotest_common.sh@10 -- # set +x 00:05:24.291 [2024-07-15 14:39:58.161469] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
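The opal_revert_cleanup step above first enumerates NVMe BDFs from SPDK's gen_nvme.sh output and then keeps only controllers whose PCI device ID matches 0x0a54. A stand-alone version of that filter, using the same pipeline seen in the trace (the 0x0a54 ID and repository path come from this run; adjust for your tree):
  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # list NVMe transport addresses (PCI BDFs) known to SPDK
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  # keep only controllers with PCI device ID 0x0a54
  for bdf in "${bdfs[@]}"; do
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == 0x0a54 ]] && echo "$bdf"
  done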
00:05:24.291 [2024-07-15 14:39:58.161517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670283 ] 00:05:24.291 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.548 [2024-07-15 14:39:58.215716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.548 [2024-07-15 14:39:58.296221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.136 14:39:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.136 14:39:58 -- common/autotest_common.sh@862 -- # return 0 00:05:25.136 14:39:58 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:25.136 14:39:58 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:25.136 14:39:58 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:05:28.418 nvme0n1 00:05:28.418 14:40:01 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:28.418 [2024-07-15 14:40:02.070049] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:28.418 request: 00:05:28.418 { 00:05:28.418 "nvme_ctrlr_name": "nvme0", 00:05:28.418 "password": "test", 00:05:28.418 "method": "bdev_nvme_opal_revert", 00:05:28.418 "req_id": 1 00:05:28.418 } 00:05:28.418 Got JSON-RPC error response 00:05:28.418 response: 00:05:28.418 { 00:05:28.418 "code": -32602, 00:05:28.418 "message": "Invalid parameters" 00:05:28.418 } 00:05:28.418 14:40:02 -- common/autotest_common.sh@1604 -- # true 00:05:28.418 14:40:02 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:28.418 14:40:02 -- common/autotest_common.sh@1608 -- # killprocess 2670283 00:05:28.418 14:40:02 -- common/autotest_common.sh@948 -- # '[' -z 2670283 ']' 00:05:28.418 14:40:02 -- common/autotest_common.sh@952 -- # kill -0 2670283 00:05:28.418 14:40:02 -- common/autotest_common.sh@953 -- # uname 00:05:28.418 14:40:02 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.418 14:40:02 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2670283 00:05:28.418 14:40:02 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.418 14:40:02 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.418 14:40:02 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2670283' 00:05:28.419 killing process with pid 2670283 00:05:28.419 14:40:02 -- common/autotest_common.sh@967 -- # kill 2670283 00:05:28.419 14:40:02 -- common/autotest_common.sh@972 -- # wait 2670283 00:05:30.952 14:40:04 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:30.952 14:40:04 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:30.952 14:40:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:30.952 14:40:04 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:30.952 14:40:04 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:30.952 14:40:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.952 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:05:30.952 14:40:04 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:30.952 14:40:04 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:30.952 14:40:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
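The JSON-RPC exchange above shows the expected failure path when the attached controller does not support Opal: bdev_nvme_opal_revert returns -32602 "Invalid parameters". A minimal reproduction against an already-running spdk_tgt, assuming the same controller address (0000:5f:00.0) and the default /var/tmp/spdk.sock socket:
  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # attach the PCIe controller as bdev controller "nvme0"
  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0
  # attempt an Opal revert; on non-Opal drives this is expected to fail with
  # code -32602, which the test flow above tolerates (hence the `|| true`)
  "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true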
00:05:30.952 14:40:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.952 14:40:04 -- common/autotest_common.sh@10 -- # set +x 00:05:30.952 ************************************ 00:05:30.952 START TEST env 00:05:30.952 ************************************ 00:05:30.952 14:40:04 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:30.952 * Looking for test storage... 00:05:30.952 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:30.952 14:40:04 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.952 14:40:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.952 14:40:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.952 14:40:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.952 ************************************ 00:05:30.952 START TEST env_memory 00:05:30.952 ************************************ 00:05:30.952 14:40:04 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.952 00:05:30.952 00:05:30.952 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.952 http://cunit.sourceforge.net/ 00:05:30.952 00:05:30.952 00:05:30.952 Suite: memory 00:05:30.952 Test: alloc and free memory map ...[2024-07-15 14:40:04.504991] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.952 passed 00:05:30.952 Test: mem map translation ...[2024-07-15 14:40:04.523614] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.952 [2024-07-15 14:40:04.523629] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.952 [2024-07-15 14:40:04.523663] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.952 [2024-07-15 14:40:04.523670] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.952 passed 00:05:30.952 Test: mem map registration ...[2024-07-15 14:40:04.559255] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:30.952 [2024-07-15 14:40:04.559269] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:30.952 passed 00:05:30.952 Test: mem map adjacent registrations ...passed 00:05:30.952 00:05:30.952 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.952 suites 1 1 n/a 0 0 00:05:30.952 tests 4 4 4 0 0 00:05:30.952 asserts 152 152 152 0 n/a 00:05:30.952 00:05:30.952 Elapsed time = 0.136 seconds 00:05:30.952 00:05:30.952 real 0m0.148s 00:05:30.952 user 0m0.141s 00:05:30.952 sys 0m0.006s 00:05:30.952 14:40:04 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.952 14:40:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:30.952 
************************************ 00:05:30.952 END TEST env_memory 00:05:30.952 ************************************ 00:05:30.952 14:40:04 env -- common/autotest_common.sh@1142 -- # return 0 00:05:30.952 14:40:04 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.952 14:40:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.952 14:40:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.952 14:40:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.952 ************************************ 00:05:30.952 START TEST env_vtophys 00:05:30.952 ************************************ 00:05:30.952 14:40:04 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.952 EAL: lib.eal log level changed from notice to debug 00:05:30.952 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.952 EAL: Detected lcore 1 as core 1 on socket 0 00:05:30.952 EAL: Detected lcore 2 as core 2 on socket 0 00:05:30.952 EAL: Detected lcore 3 as core 3 on socket 0 00:05:30.952 EAL: Detected lcore 4 as core 4 on socket 0 00:05:30.952 EAL: Detected lcore 5 as core 5 on socket 0 00:05:30.952 EAL: Detected lcore 6 as core 6 on socket 0 00:05:30.952 EAL: Detected lcore 7 as core 9 on socket 0 00:05:30.952 EAL: Detected lcore 8 as core 10 on socket 0 00:05:30.952 EAL: Detected lcore 9 as core 11 on socket 0 00:05:30.952 EAL: Detected lcore 10 as core 12 on socket 0 00:05:30.952 EAL: Detected lcore 11 as core 13 on socket 0 00:05:30.952 EAL: Detected lcore 12 as core 16 on socket 0 00:05:30.952 EAL: Detected lcore 13 as core 17 on socket 0 00:05:30.952 EAL: Detected lcore 14 as core 18 on socket 0 00:05:30.952 EAL: Detected lcore 15 as core 19 on socket 0 00:05:30.952 EAL: Detected lcore 16 as core 20 on socket 0 00:05:30.952 EAL: Detected lcore 17 as core 21 on socket 0 00:05:30.952 EAL: Detected lcore 18 as core 24 on socket 0 00:05:30.952 EAL: Detected lcore 19 as core 25 on socket 0 00:05:30.952 EAL: Detected lcore 20 as core 26 on socket 0 00:05:30.952 EAL: Detected lcore 21 as core 27 on socket 0 00:05:30.952 EAL: Detected lcore 22 as core 28 on socket 0 00:05:30.952 EAL: Detected lcore 23 as core 29 on socket 0 00:05:30.952 EAL: Detected lcore 24 as core 0 on socket 1 00:05:30.952 EAL: Detected lcore 25 as core 1 on socket 1 00:05:30.952 EAL: Detected lcore 26 as core 2 on socket 1 00:05:30.952 EAL: Detected lcore 27 as core 3 on socket 1 00:05:30.952 EAL: Detected lcore 28 as core 4 on socket 1 00:05:30.952 EAL: Detected lcore 29 as core 5 on socket 1 00:05:30.952 EAL: Detected lcore 30 as core 6 on socket 1 00:05:30.952 EAL: Detected lcore 31 as core 8 on socket 1 00:05:30.952 EAL: Detected lcore 32 as core 9 on socket 1 00:05:30.952 EAL: Detected lcore 33 as core 10 on socket 1 00:05:30.952 EAL: Detected lcore 34 as core 11 on socket 1 00:05:30.952 EAL: Detected lcore 35 as core 12 on socket 1 00:05:30.952 EAL: Detected lcore 36 as core 13 on socket 1 00:05:30.952 EAL: Detected lcore 37 as core 16 on socket 1 00:05:30.952 EAL: Detected lcore 38 as core 17 on socket 1 00:05:30.952 EAL: Detected lcore 39 as core 18 on socket 1 00:05:30.952 EAL: Detected lcore 40 as core 19 on socket 1 00:05:30.952 EAL: Detected lcore 41 as core 20 on socket 1 00:05:30.952 EAL: Detected lcore 42 as core 21 on socket 1 00:05:30.952 EAL: Detected lcore 43 as core 25 on socket 1 00:05:30.952 EAL: Detected lcore 44 as core 26 on socket 1 00:05:30.952 
EAL: Detected lcore 45 as core 27 on socket 1 00:05:30.952 EAL: Detected lcore 46 as core 28 on socket 1 00:05:30.952 EAL: Detected lcore 47 as core 29 on socket 1 00:05:30.952 EAL: Detected lcore 48 as core 0 on socket 0 00:05:30.952 EAL: Detected lcore 49 as core 1 on socket 0 00:05:30.952 EAL: Detected lcore 50 as core 2 on socket 0 00:05:30.952 EAL: Detected lcore 51 as core 3 on socket 0 00:05:30.952 EAL: Detected lcore 52 as core 4 on socket 0 00:05:30.952 EAL: Detected lcore 53 as core 5 on socket 0 00:05:30.952 EAL: Detected lcore 54 as core 6 on socket 0 00:05:30.952 EAL: Detected lcore 55 as core 9 on socket 0 00:05:30.952 EAL: Detected lcore 56 as core 10 on socket 0 00:05:30.952 EAL: Detected lcore 57 as core 11 on socket 0 00:05:30.952 EAL: Detected lcore 58 as core 12 on socket 0 00:05:30.952 EAL: Detected lcore 59 as core 13 on socket 0 00:05:30.952 EAL: Detected lcore 60 as core 16 on socket 0 00:05:30.952 EAL: Detected lcore 61 as core 17 on socket 0 00:05:30.952 EAL: Detected lcore 62 as core 18 on socket 0 00:05:30.952 EAL: Detected lcore 63 as core 19 on socket 0 00:05:30.952 EAL: Detected lcore 64 as core 20 on socket 0 00:05:30.952 EAL: Detected lcore 65 as core 21 on socket 0 00:05:30.952 EAL: Detected lcore 66 as core 24 on socket 0 00:05:30.952 EAL: Detected lcore 67 as core 25 on socket 0 00:05:30.952 EAL: Detected lcore 68 as core 26 on socket 0 00:05:30.952 EAL: Detected lcore 69 as core 27 on socket 0 00:05:30.952 EAL: Detected lcore 70 as core 28 on socket 0 00:05:30.952 EAL: Detected lcore 71 as core 29 on socket 0 00:05:30.952 EAL: Detected lcore 72 as core 0 on socket 1 00:05:30.952 EAL: Detected lcore 73 as core 1 on socket 1 00:05:30.952 EAL: Detected lcore 74 as core 2 on socket 1 00:05:30.952 EAL: Detected lcore 75 as core 3 on socket 1 00:05:30.952 EAL: Detected lcore 76 as core 4 on socket 1 00:05:30.952 EAL: Detected lcore 77 as core 5 on socket 1 00:05:30.952 EAL: Detected lcore 78 as core 6 on socket 1 00:05:30.952 EAL: Detected lcore 79 as core 8 on socket 1 00:05:30.952 EAL: Detected lcore 80 as core 9 on socket 1 00:05:30.952 EAL: Detected lcore 81 as core 10 on socket 1 00:05:30.952 EAL: Detected lcore 82 as core 11 on socket 1 00:05:30.952 EAL: Detected lcore 83 as core 12 on socket 1 00:05:30.952 EAL: Detected lcore 84 as core 13 on socket 1 00:05:30.952 EAL: Detected lcore 85 as core 16 on socket 1 00:05:30.952 EAL: Detected lcore 86 as core 17 on socket 1 00:05:30.952 EAL: Detected lcore 87 as core 18 on socket 1 00:05:30.952 EAL: Detected lcore 88 as core 19 on socket 1 00:05:30.952 EAL: Detected lcore 89 as core 20 on socket 1 00:05:30.952 EAL: Detected lcore 90 as core 21 on socket 1 00:05:30.952 EAL: Detected lcore 91 as core 25 on socket 1 00:05:30.952 EAL: Detected lcore 92 as core 26 on socket 1 00:05:30.952 EAL: Detected lcore 93 as core 27 on socket 1 00:05:30.952 EAL: Detected lcore 94 as core 28 on socket 1 00:05:30.952 EAL: Detected lcore 95 as core 29 on socket 1 00:05:30.952 EAL: Maximum logical cores by configuration: 128 00:05:30.953 EAL: Detected CPU lcores: 96 00:05:30.953 EAL: Detected NUMA nodes: 2 00:05:30.953 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:30.953 EAL: Detected shared linkage of DPDK 00:05:30.953 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.953 EAL: Bus pci wants IOVA as 'DC' 00:05:30.953 EAL: Buses did not request a specific IOVA mode. 00:05:30.953 EAL: IOMMU is available, selecting IOVA as VA mode. 
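Before the vtophys suite runs, EAL probes the CPU/NUMA topology, the IOMMU, and VFIO, as logged above. To confirm the same preconditions on a host by hand, something along these lines is usually enough (standard Linux tools and sysfs paths, not SPDK-specific):
  # CPU / NUMA topology that EAL will detect
  lscpu | grep -E 'Socket|NUMA|CPU\(s\)'
  # an IOMMU is available if iommu groups exist
  ls /sys/kernel/iommu_groups | head
  # VFIO type-1 support is provided by these modules
  lsmod | grep -E 'vfio_pci|vfio_iommu_type1'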
00:05:30.953 EAL: Selected IOVA mode 'VA' 00:05:30.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.953 EAL: Probing VFIO support... 00:05:30.953 EAL: IOMMU type 1 (Type 1) is supported 00:05:30.953 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:30.953 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:30.953 EAL: VFIO support initialized 00:05:30.953 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.953 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.953 EAL: Setting up physically contiguous memory... 00:05:30.953 EAL: Setting maximum number of open files to 524288 00:05:30.953 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.953 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:30.953 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.953 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.953 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:30.953 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:30.953 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:30.953 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:30.953 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.953 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:30.953 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.953 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.953 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:30.953 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:30.953 EAL: Hugepages will be freed exactly as allocated. 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: TSC frequency is ~2100000 KHz 00:05:30.953 EAL: Main lcore 0 is ready (tid=7f3231ccba00;cpuset=[0]) 00:05:30.953 EAL: Trying to obtain current memory policy. 00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 0 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:30.953 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.953 00:05:30.953 00:05:30.953 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.953 http://cunit.sourceforge.net/ 00:05:30.953 00:05:30.953 00:05:30.953 Suite: components_suite 00:05:30.953 Test: vtophys_malloc_test ...passed 00:05:30.953 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 4 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.953 EAL: Trying to obtain current memory policy. 00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 4 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.953 EAL: Trying to obtain current memory policy. 
00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 4 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.953 EAL: Trying to obtain current memory policy. 00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 4 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 18MB 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was shrunk by 18MB 00:05:30.953 EAL: Trying to obtain current memory policy. 00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 4 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 34MB 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was shrunk by 34MB 00:05:30.953 EAL: Trying to obtain current memory policy. 00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 4 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 66MB 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was shrunk by 66MB 00:05:30.953 EAL: Trying to obtain current memory policy. 00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.953 EAL: Restoring previous memory policy: 4 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was expanded by 130MB 00:05:30.953 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.953 EAL: request: mp_malloc_sync 00:05:30.953 EAL: No shared files mode enabled, IPC is disabled 00:05:30.953 EAL: Heap on socket 0 was shrunk by 130MB 00:05:30.953 EAL: Trying to obtain current memory policy. 
00:05:30.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.212 EAL: Restoring previous memory policy: 4 00:05:31.212 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.212 EAL: request: mp_malloc_sync 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: Heap on socket 0 was expanded by 258MB 00:05:31.212 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.212 EAL: request: mp_malloc_sync 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: Heap on socket 0 was shrunk by 258MB 00:05:31.212 EAL: Trying to obtain current memory policy. 00:05:31.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.212 EAL: Restoring previous memory policy: 4 00:05:31.212 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.212 EAL: request: mp_malloc_sync 00:05:31.212 EAL: No shared files mode enabled, IPC is disabled 00:05:31.212 EAL: Heap on socket 0 was expanded by 514MB 00:05:31.470 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.470 EAL: request: mp_malloc_sync 00:05:31.470 EAL: No shared files mode enabled, IPC is disabled 00:05:31.470 EAL: Heap on socket 0 was shrunk by 514MB 00:05:31.470 EAL: Trying to obtain current memory policy. 00:05:31.470 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.728 EAL: Restoring previous memory policy: 4 00:05:31.728 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.728 EAL: request: mp_malloc_sync 00:05:31.728 EAL: No shared files mode enabled, IPC is disabled 00:05:31.728 EAL: Heap on socket 0 was expanded by 1026MB 00:05:31.728 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.986 EAL: request: mp_malloc_sync 00:05:31.986 EAL: No shared files mode enabled, IPC is disabled 00:05:31.986 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:31.986 passed 00:05:31.986 00:05:31.986 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.986 suites 1 1 n/a 0 0 00:05:31.986 tests 2 2 2 0 0 00:05:31.986 asserts 497 497 497 0 n/a 00:05:31.986 00:05:31.986 Elapsed time = 0.965 seconds 00:05:31.986 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.986 EAL: request: mp_malloc_sync 00:05:31.986 EAL: No shared files mode enabled, IPC is disabled 00:05:31.986 EAL: Heap on socket 0 was shrunk by 2MB 00:05:31.986 EAL: No shared files mode enabled, IPC is disabled 00:05:31.986 EAL: No shared files mode enabled, IPC is disabled 00:05:31.986 EAL: No shared files mode enabled, IPC is disabled 00:05:31.986 00:05:31.986 real 0m1.071s 00:05:31.986 user 0m0.648s 00:05:31.986 sys 0m0.401s 00:05:31.986 14:40:05 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.986 14:40:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:31.986 ************************************ 00:05:31.986 END TEST env_vtophys 00:05:31.986 ************************************ 00:05:31.986 14:40:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:31.986 14:40:05 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:31.986 14:40:05 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.986 14:40:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.986 14:40:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.986 ************************************ 00:05:31.986 START TEST env_pci 00:05:31.986 ************************************ 00:05:31.986 14:40:05 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 
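Each "Heap on socket 0 was expanded by N MB" message above corresponds to DPDK mapping more of the 2048 kB hugepages reserved by setup.sh earlier in this run. To see how much of that pool a test is actually consuming, the per-node sysfs counters are the simplest check (standard kernel paths, not SPDK-specific):
  for node in /sys/devices/system/node/node*; do
      hp=$node/hugepages/hugepages-2048kB
      echo "$(basename "$node"): $(cat "$hp"/free_hugepages)/$(cat "$hp"/nr_hugepages) free/total 2MB hugepages"
  done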
00:05:31.986 00:05:31.986 00:05:31.986 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.986 http://cunit.sourceforge.net/ 00:05:31.986 00:05:31.986 00:05:31.986 Suite: pci 00:05:31.986 Test: pci_hook ...[2024-07-15 14:40:05.826286] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2671605 has claimed it 00:05:31.986 EAL: Cannot find device (10000:00:01.0) 00:05:31.986 EAL: Failed to attach device on primary process 00:05:31.986 passed 00:05:31.986 00:05:31.986 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.986 suites 1 1 n/a 0 0 00:05:31.986 tests 1 1 1 0 0 00:05:31.986 asserts 25 25 25 0 n/a 00:05:31.986 00:05:31.986 Elapsed time = 0.022 seconds 00:05:31.986 00:05:31.986 real 0m0.036s 00:05:31.986 user 0m0.014s 00:05:31.986 sys 0m0.022s 00:05:31.986 14:40:05 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.986 14:40:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:31.986 ************************************ 00:05:31.986 END TEST env_pci 00:05:31.986 ************************************ 00:05:31.986 14:40:05 env -- common/autotest_common.sh@1142 -- # return 0 00:05:31.986 14:40:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:31.986 14:40:05 env -- env/env.sh@15 -- # uname 00:05:31.986 14:40:05 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:31.986 14:40:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:31.986 14:40:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:31.986 14:40:05 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:31.986 14:40:05 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.986 14:40:05 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.245 ************************************ 00:05:32.245 START TEST env_dpdk_post_init 00:05:32.245 ************************************ 00:05:32.245 14:40:05 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.245 EAL: Detected CPU lcores: 96 00:05:32.245 EAL: Detected NUMA nodes: 2 00:05:32.245 EAL: Detected shared linkage of DPDK 00:05:32.245 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.245 EAL: Selected IOVA mode 'VA' 00:05:32.245 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.245 EAL: VFIO support initialized 00:05:32.245 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.245 EAL: Using IOMMU type 1 (Type 1) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 
00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:32.245 EAL: Ignore mapping IO port bar(1) 00:05:32.245 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:33.181 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:33.181 EAL: Ignore mapping IO port bar(1) 00:05:33.181 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:37.360 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:37.360 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:05:37.360 Starting DPDK initialization... 00:05:37.360 Starting SPDK post initialization... 00:05:37.360 SPDK NVMe probe 00:05:37.360 Attaching to 0000:5f:00.0 00:05:37.360 Attached to 0000:5f:00.0 00:05:37.360 Cleaning up... 
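The probe messages above depend entirely on which kernel driver each BDF is bound to at the time (vfio-pci for devices SPDK should claim, ioatdma/nvme when they are handed back to the kernel). A quick way to audit the current binding for the devices seen in this run:
  # show the bound driver for the NVMe controller and one of the I/OAT channels
  for bdf in 0000:5f:00.0 0000:00:04.0; do
      if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
          echo "$bdf -> $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
      else
          echo "$bdf -> no driver bound"
      fi
  done
  # or let SPDK summarize everything, as done earlier in this test
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status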
00:05:37.360 00:05:37.360 real 0m4.908s 00:05:37.360 user 0m3.830s 00:05:37.360 sys 0m0.153s 00:05:37.360 14:40:10 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.360 14:40:10 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:37.360 ************************************ 00:05:37.360 END TEST env_dpdk_post_init 00:05:37.360 ************************************ 00:05:37.360 14:40:10 env -- common/autotest_common.sh@1142 -- # return 0 00:05:37.360 14:40:10 env -- env/env.sh@26 -- # uname 00:05:37.360 14:40:10 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:37.360 14:40:10 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.360 14:40:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.360 14:40:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.360 14:40:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.360 ************************************ 00:05:37.360 START TEST env_mem_callbacks 00:05:37.360 ************************************ 00:05:37.360 14:40:10 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:37.360 EAL: Detected CPU lcores: 96 00:05:37.360 EAL: Detected NUMA nodes: 2 00:05:37.360 EAL: Detected shared linkage of DPDK 00:05:37.360 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:37.360 EAL: Selected IOVA mode 'VA' 00:05:37.360 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.360 EAL: VFIO support initialized 00:05:37.360 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:37.360 00:05:37.360 00:05:37.360 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.360 http://cunit.sourceforge.net/ 00:05:37.360 00:05:37.360 00:05:37.360 Suite: memory 00:05:37.360 Test: test ... 
00:05:37.360 register 0x200000200000 2097152 00:05:37.360 malloc 3145728 00:05:37.360 register 0x200000400000 4194304 00:05:37.360 buf 0x200000500000 len 3145728 PASSED 00:05:37.360 malloc 64 00:05:37.360 buf 0x2000004fff40 len 64 PASSED 00:05:37.360 malloc 4194304 00:05:37.360 register 0x200000800000 6291456 00:05:37.360 buf 0x200000a00000 len 4194304 PASSED 00:05:37.360 free 0x200000500000 3145728 00:05:37.360 free 0x2000004fff40 64 00:05:37.360 unregister 0x200000400000 4194304 PASSED 00:05:37.360 free 0x200000a00000 4194304 00:05:37.360 unregister 0x200000800000 6291456 PASSED 00:05:37.360 malloc 8388608 00:05:37.360 register 0x200000400000 10485760 00:05:37.360 buf 0x200000600000 len 8388608 PASSED 00:05:37.360 free 0x200000600000 8388608 00:05:37.360 unregister 0x200000400000 10485760 PASSED 00:05:37.360 passed 00:05:37.360 00:05:37.360 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.360 suites 1 1 n/a 0 0 00:05:37.360 tests 1 1 1 0 0 00:05:37.360 asserts 15 15 15 0 n/a 00:05:37.360 00:05:37.360 Elapsed time = 0.005 seconds 00:05:37.360 00:05:37.360 real 0m0.053s 00:05:37.361 user 0m0.021s 00:05:37.361 sys 0m0.032s 00:05:37.361 14:40:10 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.361 14:40:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:37.361 ************************************ 00:05:37.361 END TEST env_mem_callbacks 00:05:37.361 ************************************ 00:05:37.361 14:40:10 env -- common/autotest_common.sh@1142 -- # return 0 00:05:37.361 00:05:37.361 real 0m6.629s 00:05:37.361 user 0m4.836s 00:05:37.361 sys 0m0.873s 00:05:37.361 14:40:10 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.361 14:40:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.361 ************************************ 00:05:37.361 END TEST env 00:05:37.361 ************************************ 00:05:37.361 14:40:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.361 14:40:11 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.361 14:40:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.361 14:40:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.361 14:40:11 -- common/autotest_common.sh@10 -- # set +x 00:05:37.361 ************************************ 00:05:37.361 START TEST rpc 00:05:37.361 ************************************ 00:05:37.361 14:40:11 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.361 * Looking for test storage... 00:05:37.361 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:37.361 14:40:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2672643 00:05:37.361 14:40:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.361 14:40:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2672643 00:05:37.361 14:40:11 rpc -- common/autotest_common.sh@829 -- # '[' -z 2672643 ']' 00:05:37.361 14:40:11 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.361 14:40:11 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:37.361 14:40:11 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.361 14:40:11 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:37.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.361 14:40:11 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.361 14:40:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.361 [2024-07-15 14:40:11.179564] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:05:37.361 [2024-07-15 14:40:11.179608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672643 ] 00:05:37.361 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.361 [2024-07-15 14:40:11.233203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.618 [2024-07-15 14:40:11.313128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:37.618 [2024-07-15 14:40:11.313160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2672643' to capture a snapshot of events at runtime. 00:05:37.618 [2024-07-15 14:40:11.313167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.618 [2024-07-15 14:40:11.313173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.618 [2024-07-15 14:40:11.313178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2672643 for offline analysis/debug. 00:05:37.618 [2024-07-15 14:40:11.313196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.183 14:40:11 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.183 14:40:11 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.183 14:40:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:38.183 14:40:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:38.183 14:40:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:38.183 14:40:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:38.183 14:40:11 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.183 14:40:11 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.183 14:40:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.183 ************************************ 00:05:38.183 START TEST rpc_integrity 00:05:38.183 ************************************ 00:05:38.183 14:40:11 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:38.183 14:40:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.183 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.183 14:40:12 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.183 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.183 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.183 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:38.183 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.183 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.183 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.183 { 00:05:38.183 "name": "Malloc0", 00:05:38.183 "aliases": [ 00:05:38.183 "95bf9fd3-eb5c-41ed-b9c6-5002a4052ba0" 00:05:38.183 ], 00:05:38.183 "product_name": "Malloc disk", 00:05:38.183 "block_size": 512, 00:05:38.183 "num_blocks": 16384, 00:05:38.183 "uuid": "95bf9fd3-eb5c-41ed-b9c6-5002a4052ba0", 00:05:38.183 "assigned_rate_limits": { 00:05:38.183 "rw_ios_per_sec": 0, 00:05:38.183 "rw_mbytes_per_sec": 0, 00:05:38.183 "r_mbytes_per_sec": 0, 00:05:38.183 "w_mbytes_per_sec": 0 00:05:38.183 }, 00:05:38.183 "claimed": false, 00:05:38.183 "zoned": false, 00:05:38.183 "supported_io_types": { 00:05:38.183 "read": true, 00:05:38.183 "write": true, 00:05:38.183 "unmap": true, 00:05:38.183 "flush": true, 00:05:38.183 "reset": true, 00:05:38.183 "nvme_admin": false, 00:05:38.183 "nvme_io": false, 00:05:38.183 "nvme_io_md": false, 00:05:38.183 "write_zeroes": true, 00:05:38.183 "zcopy": true, 00:05:38.183 "get_zone_info": false, 00:05:38.183 "zone_management": false, 00:05:38.183 "zone_append": false, 00:05:38.183 "compare": false, 00:05:38.183 "compare_and_write": false, 00:05:38.183 "abort": true, 00:05:38.183 "seek_hole": false, 00:05:38.183 "seek_data": false, 00:05:38.183 "copy": true, 00:05:38.183 "nvme_iov_md": false 00:05:38.183 }, 00:05:38.183 "memory_domains": [ 00:05:38.183 { 00:05:38.183 "dma_device_id": "system", 00:05:38.183 "dma_device_type": 1 00:05:38.183 }, 00:05:38.183 { 00:05:38.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.183 "dma_device_type": 2 00:05:38.183 } 00:05:38.183 ], 00:05:38.183 "driver_specific": {} 00:05:38.183 } 00:05:38.183 ]' 00:05:38.183 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.441 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.441 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:38.441 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.441 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.441 [2024-07-15 14:40:12.126434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:38.441 [2024-07-15 14:40:12.126462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.441 [2024-07-15 14:40:12.126472] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19220d0 00:05:38.441 [2024-07-15 14:40:12.126478] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.441 [2024-07-15 14:40:12.127513] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.441 [2024-07-15 14:40:12.127535] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.441 Passthru0 00:05:38.441 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.441 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.441 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.441 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.441 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.441 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.441 { 00:05:38.441 "name": "Malloc0", 00:05:38.441 "aliases": [ 00:05:38.441 "95bf9fd3-eb5c-41ed-b9c6-5002a4052ba0" 00:05:38.441 ], 00:05:38.441 "product_name": "Malloc disk", 00:05:38.441 "block_size": 512, 00:05:38.441 "num_blocks": 16384, 00:05:38.441 "uuid": "95bf9fd3-eb5c-41ed-b9c6-5002a4052ba0", 00:05:38.441 "assigned_rate_limits": { 00:05:38.441 "rw_ios_per_sec": 0, 00:05:38.441 "rw_mbytes_per_sec": 0, 00:05:38.441 "r_mbytes_per_sec": 0, 00:05:38.441 "w_mbytes_per_sec": 0 00:05:38.441 }, 00:05:38.441 "claimed": true, 00:05:38.441 "claim_type": "exclusive_write", 00:05:38.441 "zoned": false, 00:05:38.441 "supported_io_types": { 00:05:38.441 "read": true, 00:05:38.441 "write": true, 00:05:38.441 "unmap": true, 00:05:38.441 "flush": true, 00:05:38.441 "reset": true, 00:05:38.441 "nvme_admin": false, 00:05:38.441 "nvme_io": false, 00:05:38.441 "nvme_io_md": false, 00:05:38.441 "write_zeroes": true, 00:05:38.441 "zcopy": true, 00:05:38.441 "get_zone_info": false, 00:05:38.441 "zone_management": false, 00:05:38.441 "zone_append": false, 00:05:38.441 "compare": false, 00:05:38.441 "compare_and_write": false, 00:05:38.441 "abort": true, 00:05:38.441 "seek_hole": false, 00:05:38.441 "seek_data": false, 00:05:38.441 "copy": true, 00:05:38.441 "nvme_iov_md": false 00:05:38.441 }, 00:05:38.441 "memory_domains": [ 00:05:38.441 { 00:05:38.441 "dma_device_id": "system", 00:05:38.442 "dma_device_type": 1 00:05:38.442 }, 00:05:38.442 { 00:05:38.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.442 "dma_device_type": 2 00:05:38.442 } 00:05:38.442 ], 00:05:38.442 "driver_specific": {} 00:05:38.442 }, 00:05:38.442 { 00:05:38.442 "name": "Passthru0", 00:05:38.442 "aliases": [ 00:05:38.442 "fd81afcd-b392-54ef-a4fa-44601a6b5942" 00:05:38.442 ], 00:05:38.442 "product_name": "passthru", 00:05:38.442 "block_size": 512, 00:05:38.442 "num_blocks": 16384, 00:05:38.442 "uuid": "fd81afcd-b392-54ef-a4fa-44601a6b5942", 00:05:38.442 "assigned_rate_limits": { 00:05:38.442 "rw_ios_per_sec": 0, 00:05:38.442 "rw_mbytes_per_sec": 0, 00:05:38.442 "r_mbytes_per_sec": 0, 00:05:38.442 "w_mbytes_per_sec": 0 00:05:38.442 }, 00:05:38.442 "claimed": false, 00:05:38.442 "zoned": false, 00:05:38.442 "supported_io_types": { 00:05:38.442 "read": true, 00:05:38.442 "write": true, 00:05:38.442 "unmap": true, 00:05:38.442 "flush": true, 00:05:38.442 "reset": true, 00:05:38.442 "nvme_admin": false, 00:05:38.442 "nvme_io": false, 00:05:38.442 "nvme_io_md": false, 00:05:38.442 "write_zeroes": true, 00:05:38.442 "zcopy": true, 00:05:38.442 "get_zone_info": false, 00:05:38.442 "zone_management": false, 00:05:38.442 "zone_append": false, 00:05:38.442 "compare": false, 00:05:38.442 "compare_and_write": false, 00:05:38.442 "abort": true, 00:05:38.442 "seek_hole": false, 00:05:38.442 "seek_data": 
false, 00:05:38.442 "copy": true, 00:05:38.442 "nvme_iov_md": false 00:05:38.442 }, 00:05:38.442 "memory_domains": [ 00:05:38.442 { 00:05:38.442 "dma_device_id": "system", 00:05:38.442 "dma_device_type": 1 00:05:38.442 }, 00:05:38.442 { 00:05:38.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.442 "dma_device_type": 2 00:05:38.442 } 00:05:38.442 ], 00:05:38.442 "driver_specific": { 00:05:38.442 "passthru": { 00:05:38.442 "name": "Passthru0", 00:05:38.442 "base_bdev_name": "Malloc0" 00:05:38.442 } 00:05:38.442 } 00:05:38.442 } 00:05:38.442 ]' 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.442 14:40:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.442 00:05:38.442 real 0m0.265s 00:05:38.442 user 0m0.166s 00:05:38.442 sys 0m0.035s 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.442 14:40:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.442 ************************************ 00:05:38.442 END TEST rpc_integrity 00:05:38.442 ************************************ 00:05:38.442 14:40:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.442 14:40:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:38.442 14:40:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.442 14:40:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.442 14:40:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.442 ************************************ 00:05:38.442 START TEST rpc_plugins 00:05:38.442 ************************************ 00:05:38.442 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:38.442 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:38.442 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.442 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.442 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.442 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:38.442 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 
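The rpc_integrity pass above drives the malloc/passthru bdev lifecycle entirely through rpc_cmd. A minimal sketch of the same round trip done by hand, assuming a running spdk_tgt on the default /var/tmp/spdk.sock socket (rpc.py ships under scripts/ in the SPDK tree):

    # create an 8 MiB malloc bdev with 512-byte blocks; prints the new bdev name (e.g. Malloc0)
    ./scripts/rpc.py bdev_malloc_create 8 512
    # layer a passthru bdev on top of it, as rpc.sh@19 does
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    # list bdevs and count them the same way the test does with jq
    ./scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0

The test asserts the jq length goes 0, then 1 after the malloc create, 2 after the passthru create, and back to 0 after the deletes.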
00:05:38.442 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.442 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:38.700 { 00:05:38.700 "name": "Malloc1", 00:05:38.700 "aliases": [ 00:05:38.700 "6c2fae85-4042-4813-940d-4583b8485fe1" 00:05:38.700 ], 00:05:38.700 "product_name": "Malloc disk", 00:05:38.700 "block_size": 4096, 00:05:38.700 "num_blocks": 256, 00:05:38.700 "uuid": "6c2fae85-4042-4813-940d-4583b8485fe1", 00:05:38.700 "assigned_rate_limits": { 00:05:38.700 "rw_ios_per_sec": 0, 00:05:38.700 "rw_mbytes_per_sec": 0, 00:05:38.700 "r_mbytes_per_sec": 0, 00:05:38.700 "w_mbytes_per_sec": 0 00:05:38.700 }, 00:05:38.700 "claimed": false, 00:05:38.700 "zoned": false, 00:05:38.700 "supported_io_types": { 00:05:38.700 "read": true, 00:05:38.700 "write": true, 00:05:38.700 "unmap": true, 00:05:38.700 "flush": true, 00:05:38.700 "reset": true, 00:05:38.700 "nvme_admin": false, 00:05:38.700 "nvme_io": false, 00:05:38.700 "nvme_io_md": false, 00:05:38.700 "write_zeroes": true, 00:05:38.700 "zcopy": true, 00:05:38.700 "get_zone_info": false, 00:05:38.700 "zone_management": false, 00:05:38.700 "zone_append": false, 00:05:38.700 "compare": false, 00:05:38.700 "compare_and_write": false, 00:05:38.700 "abort": true, 00:05:38.700 "seek_hole": false, 00:05:38.700 "seek_data": false, 00:05:38.700 "copy": true, 00:05:38.700 "nvme_iov_md": false 00:05:38.700 }, 00:05:38.700 "memory_domains": [ 00:05:38.700 { 00:05:38.700 "dma_device_id": "system", 00:05:38.700 "dma_device_type": 1 00:05:38.700 }, 00:05:38.700 { 00:05:38.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.700 "dma_device_type": 2 00:05:38.700 } 00:05:38.700 ], 00:05:38.700 "driver_specific": {} 00:05:38.700 } 00:05:38.700 ]' 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:38.700 14:40:12 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:38.700 00:05:38.700 real 0m0.142s 00:05:38.700 user 0m0.085s 00:05:38.700 sys 0m0.021s 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.700 14:40:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 ************************************ 00:05:38.700 END TEST rpc_plugins 00:05:38.700 ************************************ 00:05:38.700 14:40:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.700 14:40:12 rpc -- rpc/rpc.sh@75 -- # run_test 
rpc_trace_cmd_test rpc_trace_cmd_test 00:05:38.700 14:40:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.700 14:40:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.700 14:40:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 ************************************ 00:05:38.700 START TEST rpc_trace_cmd_test 00:05:38.700 ************************************ 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.700 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2672643", 00:05:38.700 "tpoint_group_mask": "0x8", 00:05:38.700 "iscsi_conn": { 00:05:38.700 "mask": "0x2", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "scsi": { 00:05:38.700 "mask": "0x4", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "bdev": { 00:05:38.700 "mask": "0x8", 00:05:38.700 "tpoint_mask": "0xffffffffffffffff" 00:05:38.700 }, 00:05:38.700 "nvmf_rdma": { 00:05:38.700 "mask": "0x10", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "nvmf_tcp": { 00:05:38.700 "mask": "0x20", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "ftl": { 00:05:38.700 "mask": "0x40", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "blobfs": { 00:05:38.700 "mask": "0x80", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "dsa": { 00:05:38.700 "mask": "0x200", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "thread": { 00:05:38.700 "mask": "0x400", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "nvme_pcie": { 00:05:38.700 "mask": "0x800", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "iaa": { 00:05:38.700 "mask": "0x1000", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "nvme_tcp": { 00:05:38.700 "mask": "0x2000", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "bdev_nvme": { 00:05:38.700 "mask": "0x4000", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 }, 00:05:38.700 "sock": { 00:05:38.700 "mask": "0x8000", 00:05:38.700 "tpoint_mask": "0x0" 00:05:38.700 } 00:05:38.700 }' 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:38.700 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.958 
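The rpc_trace_cmd_test run above checks the trace state reported by the target (group mask 0x8 for bdev, the shm path, and a non-zero bdev tpoint mask). A sketch of the same inspection by hand, assuming the target was started with bdev tracepoints enabled as in this run:

    # dump the tracepoint state the test inspects
    ./scripts/rpc.py trace_get_info | jq .
    # pull out the two fields the test asserts on
    ./scripts/rpc.py trace_get_info | jq -r '.tpoint_group_mask, .tpoint_shm_path'
    # per the app_setup_trace NOTICE earlier in this log, a snapshot of events can then be
    # captured with: spdk_trace -s spdk_tgt -p <pid of spdk_tgt>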
00:05:38.958 real 0m0.225s 00:05:38.958 user 0m0.197s 00:05:38.958 sys 0m0.020s 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.958 14:40:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.958 ************************************ 00:05:38.958 END TEST rpc_trace_cmd_test 00:05:38.958 ************************************ 00:05:38.958 14:40:12 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:38.958 14:40:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.958 14:40:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.958 14:40:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.958 14:40:12 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.958 14:40:12 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.958 14:40:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.958 ************************************ 00:05:38.958 START TEST rpc_daemon_integrity 00:05:38.958 ************************************ 00:05:38.958 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:38.958 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.958 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.958 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.958 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.958 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.958 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:39.215 { 00:05:39.215 "name": "Malloc2", 00:05:39.215 "aliases": [ 00:05:39.215 "e597e887-d352-482a-a4d7-f7711409b93d" 00:05:39.215 ], 00:05:39.215 "product_name": "Malloc disk", 00:05:39.215 "block_size": 512, 00:05:39.215 "num_blocks": 16384, 00:05:39.215 "uuid": "e597e887-d352-482a-a4d7-f7711409b93d", 00:05:39.215 "assigned_rate_limits": { 00:05:39.215 "rw_ios_per_sec": 0, 00:05:39.215 "rw_mbytes_per_sec": 0, 00:05:39.215 "r_mbytes_per_sec": 0, 00:05:39.215 "w_mbytes_per_sec": 0 00:05:39.215 }, 00:05:39.215 "claimed": false, 00:05:39.215 "zoned": false, 00:05:39.215 "supported_io_types": { 00:05:39.215 "read": true, 00:05:39.215 "write": true, 00:05:39.215 "unmap": true, 00:05:39.215 "flush": true, 00:05:39.215 "reset": true, 00:05:39.215 "nvme_admin": false, 00:05:39.215 "nvme_io": false, 00:05:39.215 
"nvme_io_md": false, 00:05:39.215 "write_zeroes": true, 00:05:39.215 "zcopy": true, 00:05:39.215 "get_zone_info": false, 00:05:39.215 "zone_management": false, 00:05:39.215 "zone_append": false, 00:05:39.215 "compare": false, 00:05:39.215 "compare_and_write": false, 00:05:39.215 "abort": true, 00:05:39.215 "seek_hole": false, 00:05:39.215 "seek_data": false, 00:05:39.215 "copy": true, 00:05:39.215 "nvme_iov_md": false 00:05:39.215 }, 00:05:39.215 "memory_domains": [ 00:05:39.215 { 00:05:39.215 "dma_device_id": "system", 00:05:39.215 "dma_device_type": 1 00:05:39.215 }, 00:05:39.215 { 00:05:39.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.215 "dma_device_type": 2 00:05:39.215 } 00:05:39.215 ], 00:05:39.215 "driver_specific": {} 00:05:39.215 } 00:05:39.215 ]' 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.215 [2024-07-15 14:40:12.956737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:39.215 [2024-07-15 14:40:12.956764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:39.215 [2024-07-15 14:40:12.956774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19226b0 00:05:39.215 [2024-07-15 14:40:12.956780] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:39.215 [2024-07-15 14:40:12.957723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:39.215 [2024-07-15 14:40:12.957743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:39.215 Passthru0 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.215 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:39.215 { 00:05:39.215 "name": "Malloc2", 00:05:39.215 "aliases": [ 00:05:39.215 "e597e887-d352-482a-a4d7-f7711409b93d" 00:05:39.215 ], 00:05:39.215 "product_name": "Malloc disk", 00:05:39.215 "block_size": 512, 00:05:39.215 "num_blocks": 16384, 00:05:39.215 "uuid": "e597e887-d352-482a-a4d7-f7711409b93d", 00:05:39.215 "assigned_rate_limits": { 00:05:39.215 "rw_ios_per_sec": 0, 00:05:39.215 "rw_mbytes_per_sec": 0, 00:05:39.215 "r_mbytes_per_sec": 0, 00:05:39.215 "w_mbytes_per_sec": 0 00:05:39.215 }, 00:05:39.215 "claimed": true, 00:05:39.215 "claim_type": "exclusive_write", 00:05:39.215 "zoned": false, 00:05:39.215 "supported_io_types": { 00:05:39.215 "read": true, 00:05:39.215 "write": true, 00:05:39.215 "unmap": true, 00:05:39.215 "flush": true, 00:05:39.215 "reset": true, 00:05:39.215 "nvme_admin": false, 00:05:39.215 "nvme_io": false, 00:05:39.215 "nvme_io_md": false, 00:05:39.215 "write_zeroes": true, 00:05:39.215 "zcopy": true, 00:05:39.215 "get_zone_info": false, 
00:05:39.216 "zone_management": false, 00:05:39.216 "zone_append": false, 00:05:39.216 "compare": false, 00:05:39.216 "compare_and_write": false, 00:05:39.216 "abort": true, 00:05:39.216 "seek_hole": false, 00:05:39.216 "seek_data": false, 00:05:39.216 "copy": true, 00:05:39.216 "nvme_iov_md": false 00:05:39.216 }, 00:05:39.216 "memory_domains": [ 00:05:39.216 { 00:05:39.216 "dma_device_id": "system", 00:05:39.216 "dma_device_type": 1 00:05:39.216 }, 00:05:39.216 { 00:05:39.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.216 "dma_device_type": 2 00:05:39.216 } 00:05:39.216 ], 00:05:39.216 "driver_specific": {} 00:05:39.216 }, 00:05:39.216 { 00:05:39.216 "name": "Passthru0", 00:05:39.216 "aliases": [ 00:05:39.216 "cbe7251e-cd3e-5b6f-950a-f62d6990ecc1" 00:05:39.216 ], 00:05:39.216 "product_name": "passthru", 00:05:39.216 "block_size": 512, 00:05:39.216 "num_blocks": 16384, 00:05:39.216 "uuid": "cbe7251e-cd3e-5b6f-950a-f62d6990ecc1", 00:05:39.216 "assigned_rate_limits": { 00:05:39.216 "rw_ios_per_sec": 0, 00:05:39.216 "rw_mbytes_per_sec": 0, 00:05:39.216 "r_mbytes_per_sec": 0, 00:05:39.216 "w_mbytes_per_sec": 0 00:05:39.216 }, 00:05:39.216 "claimed": false, 00:05:39.216 "zoned": false, 00:05:39.216 "supported_io_types": { 00:05:39.216 "read": true, 00:05:39.216 "write": true, 00:05:39.216 "unmap": true, 00:05:39.216 "flush": true, 00:05:39.216 "reset": true, 00:05:39.216 "nvme_admin": false, 00:05:39.216 "nvme_io": false, 00:05:39.216 "nvme_io_md": false, 00:05:39.216 "write_zeroes": true, 00:05:39.216 "zcopy": true, 00:05:39.216 "get_zone_info": false, 00:05:39.216 "zone_management": false, 00:05:39.216 "zone_append": false, 00:05:39.216 "compare": false, 00:05:39.216 "compare_and_write": false, 00:05:39.216 "abort": true, 00:05:39.216 "seek_hole": false, 00:05:39.216 "seek_data": false, 00:05:39.216 "copy": true, 00:05:39.216 "nvme_iov_md": false 00:05:39.216 }, 00:05:39.216 "memory_domains": [ 00:05:39.216 { 00:05:39.216 "dma_device_id": "system", 00:05:39.216 "dma_device_type": 1 00:05:39.216 }, 00:05:39.216 { 00:05:39.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:39.216 "dma_device_type": 2 00:05:39.216 } 00:05:39.216 ], 00:05:39.216 "driver_specific": { 00:05:39.216 "passthru": { 00:05:39.216 "name": "Passthru0", 00:05:39.216 "base_bdev_name": "Malloc2" 00:05:39.216 } 00:05:39.216 } 00:05:39.216 } 00:05:39.216 ]' 00:05:39.216 14:40:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:39.216 14:40:13 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:39.216 00:05:39.216 real 0m0.259s 00:05:39.216 user 0m0.169s 00:05:39.216 sys 0m0.029s 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.216 14:40:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:39.216 ************************************ 00:05:39.216 END TEST rpc_daemon_integrity 00:05:39.216 ************************************ 00:05:39.216 14:40:13 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:39.216 14:40:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:39.216 14:40:13 rpc -- rpc/rpc.sh@84 -- # killprocess 2672643 00:05:39.216 14:40:13 rpc -- common/autotest_common.sh@948 -- # '[' -z 2672643 ']' 00:05:39.216 14:40:13 rpc -- common/autotest_common.sh@952 -- # kill -0 2672643 00:05:39.216 14:40:13 rpc -- common/autotest_common.sh@953 -- # uname 00:05:39.216 14:40:13 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:39.216 14:40:13 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2672643 00:05:39.475 14:40:13 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:39.475 14:40:13 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:39.475 14:40:13 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2672643' 00:05:39.475 killing process with pid 2672643 00:05:39.475 14:40:13 rpc -- common/autotest_common.sh@967 -- # kill 2672643 00:05:39.475 14:40:13 rpc -- common/autotest_common.sh@972 -- # wait 2672643 00:05:39.733 00:05:39.733 real 0m2.429s 00:05:39.733 user 0m3.138s 00:05:39.733 sys 0m0.654s 00:05:39.733 14:40:13 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.733 14:40:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.733 ************************************ 00:05:39.733 END TEST rpc 00:05:39.733 ************************************ 00:05:39.733 14:40:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.733 14:40:13 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.733 14:40:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.733 14:40:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.733 14:40:13 -- common/autotest_common.sh@10 -- # set +x 00:05:39.733 ************************************ 00:05:39.733 START TEST skip_rpc 00:05:39.733 ************************************ 00:05:39.733 14:40:13 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:39.733 * Looking for test storage... 
00:05:39.733 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:39.733 14:40:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:39.733 14:40:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:39.733 14:40:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:39.733 14:40:13 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.733 14:40:13 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.733 14:40:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.733 ************************************ 00:05:39.733 START TEST skip_rpc 00:05:39.733 ************************************ 00:05:39.733 14:40:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:39.733 14:40:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2673272 00:05:39.733 14:40:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.733 14:40:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:39.992 14:40:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:39.992 [2024-07-15 14:40:13.698375] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:05:39.992 [2024-07-15 14:40:13.698412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673272 ] 00:05:39.992 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.992 [2024-07-15 14:40:13.751365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.992 [2024-07-15 14:40:13.822728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2673272 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2673272 ']' 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2673272 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2673272 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2673272' 00:05:45.396 killing process with pid 2673272 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2673272 00:05:45.396 14:40:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2673272 00:05:45.396 00:05:45.396 real 0m5.364s 00:05:45.396 user 0m5.141s 00:05:45.396 sys 0m0.250s 00:05:45.396 14:40:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.396 14:40:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.396 ************************************ 00:05:45.396 END TEST skip_rpc 00:05:45.396 ************************************ 00:05:45.396 14:40:19 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:45.396 14:40:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:45.396 14:40:19 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.396 14:40:19 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.396 14:40:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.396 ************************************ 00:05:45.396 START TEST skip_rpc_with_json 00:05:45.396 ************************************ 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2674226 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2674226 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2674226 ']' 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
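The skip_rpc_with_json run that follows brings up spdk_tgt, creates a TCP transport over RPC (the initial nvmf_get_transports call is expected to fail), saves the live configuration, and relaunches the target from that JSON. A minimal sketch of the same round trip, assuming a running target on the default socket; /tmp/config.json here is a hypothetical output path:

    # create the TCP transport, as rpc/skip_rpc.sh@34 does
    ./scripts/rpc.py nvmf_create_transport -t tcp
    # dump the live configuration to JSON, as rpc/skip_rpc.sh@36 does with save_config
    ./scripts/rpc.py save_config > /tmp/config.json
    # relaunch the target non-interactively from that file
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json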
00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.396 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.396 [2024-07-15 14:40:19.135797] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:05:45.396 [2024-07-15 14:40:19.135840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674226 ] 00:05:45.396 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.396 [2024-07-15 14:40:19.191418] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.396 [2024-07-15 14:40:19.259209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.326 [2024-07-15 14:40:19.938492] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:46.326 request: 00:05:46.326 { 00:05:46.326 "trtype": "tcp", 00:05:46.326 "method": "nvmf_get_transports", 00:05:46.326 "req_id": 1 00:05:46.326 } 00:05:46.326 Got JSON-RPC error response 00:05:46.326 response: 00:05:46.326 { 00:05:46.326 "code": -19, 00:05:46.326 "message": "No such device" 00:05:46.326 } 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.326 [2024-07-15 14:40:19.946641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:46.326 14:40:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.326 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:46.326 14:40:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:46.326 { 00:05:46.326 "subsystems": [ 00:05:46.326 { 00:05:46.326 "subsystem": "keyring", 00:05:46.326 "config": [] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "iobuf", 00:05:46.326 "config": [ 00:05:46.326 { 00:05:46.326 "method": "iobuf_set_options", 00:05:46.326 "params": { 00:05:46.326 "small_pool_count": 8192, 00:05:46.326 "large_pool_count": 1024, 00:05:46.326 "small_bufsize": 8192, 00:05:46.326 "large_bufsize": 135168 00:05:46.326 } 00:05:46.326 } 00:05:46.326 ] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": 
"sock", 00:05:46.326 "config": [ 00:05:46.326 { 00:05:46.326 "method": "sock_set_default_impl", 00:05:46.326 "params": { 00:05:46.326 "impl_name": "posix" 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "sock_impl_set_options", 00:05:46.326 "params": { 00:05:46.326 "impl_name": "ssl", 00:05:46.326 "recv_buf_size": 4096, 00:05:46.326 "send_buf_size": 4096, 00:05:46.326 "enable_recv_pipe": true, 00:05:46.326 "enable_quickack": false, 00:05:46.326 "enable_placement_id": 0, 00:05:46.326 "enable_zerocopy_send_server": true, 00:05:46.326 "enable_zerocopy_send_client": false, 00:05:46.326 "zerocopy_threshold": 0, 00:05:46.326 "tls_version": 0, 00:05:46.326 "enable_ktls": false 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "sock_impl_set_options", 00:05:46.326 "params": { 00:05:46.326 "impl_name": "posix", 00:05:46.326 "recv_buf_size": 2097152, 00:05:46.326 "send_buf_size": 2097152, 00:05:46.326 "enable_recv_pipe": true, 00:05:46.326 "enable_quickack": false, 00:05:46.326 "enable_placement_id": 0, 00:05:46.326 "enable_zerocopy_send_server": true, 00:05:46.326 "enable_zerocopy_send_client": false, 00:05:46.326 "zerocopy_threshold": 0, 00:05:46.326 "tls_version": 0, 00:05:46.326 "enable_ktls": false 00:05:46.326 } 00:05:46.326 } 00:05:46.326 ] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "vmd", 00:05:46.326 "config": [] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "accel", 00:05:46.326 "config": [ 00:05:46.326 { 00:05:46.326 "method": "accel_set_options", 00:05:46.326 "params": { 00:05:46.326 "small_cache_size": 128, 00:05:46.326 "large_cache_size": 16, 00:05:46.326 "task_count": 2048, 00:05:46.326 "sequence_count": 2048, 00:05:46.326 "buf_count": 2048 00:05:46.326 } 00:05:46.326 } 00:05:46.326 ] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "bdev", 00:05:46.326 "config": [ 00:05:46.326 { 00:05:46.326 "method": "bdev_set_options", 00:05:46.326 "params": { 00:05:46.326 "bdev_io_pool_size": 65535, 00:05:46.326 "bdev_io_cache_size": 256, 00:05:46.326 "bdev_auto_examine": true, 00:05:46.326 "iobuf_small_cache_size": 128, 00:05:46.326 "iobuf_large_cache_size": 16 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "bdev_raid_set_options", 00:05:46.326 "params": { 00:05:46.326 "process_window_size_kb": 1024 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "bdev_iscsi_set_options", 00:05:46.326 "params": { 00:05:46.326 "timeout_sec": 30 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "bdev_nvme_set_options", 00:05:46.326 "params": { 00:05:46.326 "action_on_timeout": "none", 00:05:46.326 "timeout_us": 0, 00:05:46.326 "timeout_admin_us": 0, 00:05:46.326 "keep_alive_timeout_ms": 10000, 00:05:46.326 "arbitration_burst": 0, 00:05:46.326 "low_priority_weight": 0, 00:05:46.326 "medium_priority_weight": 0, 00:05:46.326 "high_priority_weight": 0, 00:05:46.326 "nvme_adminq_poll_period_us": 10000, 00:05:46.326 "nvme_ioq_poll_period_us": 0, 00:05:46.326 "io_queue_requests": 0, 00:05:46.326 "delay_cmd_submit": true, 00:05:46.326 "transport_retry_count": 4, 00:05:46.326 "bdev_retry_count": 3, 00:05:46.326 "transport_ack_timeout": 0, 00:05:46.326 "ctrlr_loss_timeout_sec": 0, 00:05:46.326 "reconnect_delay_sec": 0, 00:05:46.326 "fast_io_fail_timeout_sec": 0, 00:05:46.326 "disable_auto_failback": false, 00:05:46.326 "generate_uuids": false, 00:05:46.326 "transport_tos": 0, 00:05:46.326 "nvme_error_stat": false, 00:05:46.326 "rdma_srq_size": 0, 00:05:46.326 "io_path_stat": false, 
00:05:46.326 "allow_accel_sequence": false, 00:05:46.326 "rdma_max_cq_size": 0, 00:05:46.326 "rdma_cm_event_timeout_ms": 0, 00:05:46.326 "dhchap_digests": [ 00:05:46.326 "sha256", 00:05:46.326 "sha384", 00:05:46.326 "sha512" 00:05:46.326 ], 00:05:46.326 "dhchap_dhgroups": [ 00:05:46.326 "null", 00:05:46.326 "ffdhe2048", 00:05:46.326 "ffdhe3072", 00:05:46.326 "ffdhe4096", 00:05:46.326 "ffdhe6144", 00:05:46.326 "ffdhe8192" 00:05:46.326 ] 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "bdev_nvme_set_hotplug", 00:05:46.326 "params": { 00:05:46.326 "period_us": 100000, 00:05:46.326 "enable": false 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "bdev_wait_for_examine" 00:05:46.326 } 00:05:46.326 ] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "scsi", 00:05:46.326 "config": null 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "scheduler", 00:05:46.326 "config": [ 00:05:46.326 { 00:05:46.326 "method": "framework_set_scheduler", 00:05:46.326 "params": { 00:05:46.326 "name": "static" 00:05:46.326 } 00:05:46.326 } 00:05:46.326 ] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "vhost_scsi", 00:05:46.326 "config": [] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "vhost_blk", 00:05:46.326 "config": [] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "ublk", 00:05:46.326 "config": [] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "nbd", 00:05:46.326 "config": [] 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "subsystem": "nvmf", 00:05:46.326 "config": [ 00:05:46.326 { 00:05:46.326 "method": "nvmf_set_config", 00:05:46.326 "params": { 00:05:46.326 "discovery_filter": "match_any", 00:05:46.326 "admin_cmd_passthru": { 00:05:46.326 "identify_ctrlr": false 00:05:46.326 } 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "nvmf_set_max_subsystems", 00:05:46.326 "params": { 00:05:46.326 "max_subsystems": 1024 00:05:46.326 } 00:05:46.326 }, 00:05:46.326 { 00:05:46.326 "method": "nvmf_set_crdt", 00:05:46.326 "params": { 00:05:46.327 "crdt1": 0, 00:05:46.327 "crdt2": 0, 00:05:46.327 "crdt3": 0 00:05:46.327 } 00:05:46.327 }, 00:05:46.327 { 00:05:46.327 "method": "nvmf_create_transport", 00:05:46.327 "params": { 00:05:46.327 "trtype": "TCP", 00:05:46.327 "max_queue_depth": 128, 00:05:46.327 "max_io_qpairs_per_ctrlr": 127, 00:05:46.327 "in_capsule_data_size": 4096, 00:05:46.327 "max_io_size": 131072, 00:05:46.327 "io_unit_size": 131072, 00:05:46.327 "max_aq_depth": 128, 00:05:46.327 "num_shared_buffers": 511, 00:05:46.327 "buf_cache_size": 4294967295, 00:05:46.327 "dif_insert_or_strip": false, 00:05:46.327 "zcopy": false, 00:05:46.327 "c2h_success": true, 00:05:46.327 "sock_priority": 0, 00:05:46.327 "abort_timeout_sec": 1, 00:05:46.327 "ack_timeout": 0, 00:05:46.327 "data_wr_pool_size": 0 00:05:46.327 } 00:05:46.327 } 00:05:46.327 ] 00:05:46.327 }, 00:05:46.327 { 00:05:46.327 "subsystem": "iscsi", 00:05:46.327 "config": [ 00:05:46.327 { 00:05:46.327 "method": "iscsi_set_options", 00:05:46.327 "params": { 00:05:46.327 "node_base": "iqn.2016-06.io.spdk", 00:05:46.327 "max_sessions": 128, 00:05:46.327 "max_connections_per_session": 2, 00:05:46.327 "max_queue_depth": 64, 00:05:46.327 "default_time2wait": 2, 00:05:46.327 "default_time2retain": 20, 00:05:46.327 "first_burst_length": 8192, 00:05:46.327 "immediate_data": true, 00:05:46.327 "allow_duplicated_isid": false, 00:05:46.327 "error_recovery_level": 0, 00:05:46.327 "nop_timeout": 60, 00:05:46.327 "nop_in_interval": 30, 00:05:46.327 "disable_chap": 
false, 00:05:46.327 "require_chap": false, 00:05:46.327 "mutual_chap": false, 00:05:46.327 "chap_group": 0, 00:05:46.327 "max_large_datain_per_connection": 64, 00:05:46.327 "max_r2t_per_connection": 4, 00:05:46.327 "pdu_pool_size": 36864, 00:05:46.327 "immediate_data_pool_size": 16384, 00:05:46.327 "data_out_pool_size": 2048 00:05:46.327 } 00:05:46.327 } 00:05:46.327 ] 00:05:46.327 } 00:05:46.327 ] 00:05:46.327 } 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2674226 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2674226 ']' 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2674226 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674226 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674226' 00:05:46.327 killing process with pid 2674226 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2674226 00:05:46.327 14:40:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2674226 00:05:46.583 14:40:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2674462 00:05:46.583 14:40:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.583 14:40:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2674462 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2674462 ']' 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2674462 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2674462 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2674462' 00:05:51.844 killing process with pid 2674462 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2674462 00:05:51.844 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2674462 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:52.102 00:05:52.102 real 0m6.705s 00:05:52.102 user 0m6.541s 00:05:52.102 sys 0m0.563s 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:52.102 ************************************ 00:05:52.102 END TEST skip_rpc_with_json 00:05:52.102 ************************************ 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.102 14:40:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.102 ************************************ 00:05:52.102 START TEST skip_rpc_with_delay 00:05:52.102 ************************************ 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:52.102 [2024-07-15 14:40:25.900811] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
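The error above is the expected outcome of skip_rpc_with_delay: --wait-for-rpc is rejected when the RPC server is disabled with --no-rpc-server. For reference, a sketch of the flag's normal use, which keeps the RPC server enabled and finishes initialization over RPC (framework_start_init is the standard SPDK RPC for this; it is not exercised in this log):

    # start the target paused, before subsystem initialization
    ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    # issue any pre-init RPCs here, then complete startup
    ./scripts/rpc.py framework_start_init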
00:05:52.102 [2024-07-15 14:40:25.900876] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.102 00:05:52.102 real 0m0.060s 00:05:52.102 user 0m0.038s 00:05:52.102 sys 0m0.020s 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.102 14:40:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:52.102 ************************************ 00:05:52.102 END TEST skip_rpc_with_delay 00:05:52.102 ************************************ 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:52.102 14:40:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:52.102 14:40:25 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:52.102 14:40:25 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.102 14:40:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.102 ************************************ 00:05:52.102 START TEST exit_on_failed_rpc_init 00:05:52.102 ************************************ 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2675433 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2675433 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2675433 ']' 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.102 14:40:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.360 [2024-07-15 14:40:26.022560] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:05:52.360 [2024-07-15 14:40:26.022601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675433 ] 00:05:52.360 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.360 [2024-07-15 14:40:26.077063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.360 [2024-07-15 14:40:26.156015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.925 14:40:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:53.182 [2024-07-15 14:40:26.861055] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:05:53.183 [2024-07-15 14:40:26.861101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675632 ] 00:05:53.183 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.183 [2024-07-15 14:40:26.914079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.183 [2024-07-15 14:40:26.985435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.183 [2024-07-15 14:40:26.985517] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:53.183 [2024-07-15 14:40:26.985527] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:53.183 [2024-07-15 14:40:26.985533] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2675433 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2675433 ']' 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2675433 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2675433 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2675433' 00:05:53.183 killing process with pid 2675433 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2675433 00:05:53.183 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2675433 00:05:53.748 00:05:53.748 real 0m1.432s 00:05:53.748 user 0m1.638s 00:05:53.748 sys 0m0.394s 00:05:53.748 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.748 14:40:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.748 ************************************ 00:05:53.748 END TEST exit_on_failed_rpc_init 00:05:53.748 ************************************ 00:05:53.748 14:40:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:53.748 14:40:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:53.748 00:05:53.748 real 0m13.893s 00:05:53.748 user 0m13.489s 00:05:53.748 sys 0m1.448s 00:05:53.748 14:40:27 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.748 14:40:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.748 ************************************ 00:05:53.748 END TEST skip_rpc 00:05:53.748 ************************************ 00:05:53.748 14:40:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.748 14:40:27 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.748 14:40:27 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.748 14:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.748 14:40:27 -- common/autotest_common.sh@10 -- # set +x 00:05:53.748 ************************************ 00:05:53.748 START TEST rpc_client 00:05:53.748 ************************************ 00:05:53.748 14:40:27 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.748 * Looking for test storage... 00:05:53.748 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:53.748 14:40:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:53.748 OK 00:05:53.748 14:40:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:53.748 00:05:53.748 real 0m0.114s 00:05:53.748 user 0m0.055s 00:05:53.748 sys 0m0.065s 00:05:53.748 14:40:27 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.748 14:40:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:53.748 ************************************ 00:05:53.748 END TEST rpc_client 00:05:53.748 ************************************ 00:05:53.748 14:40:27 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.748 14:40:27 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.748 14:40:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.748 14:40:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.748 14:40:27 -- common/autotest_common.sh@10 -- # set +x 00:05:54.005 ************************************ 00:05:54.005 START TEST json_config 00:05:54.005 ************************************ 00:05:54.005 14:40:27 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:54.005 14:40:27 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.005 14:40:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:54.005 14:40:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.005 14:40:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.005 14:40:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.005 14:40:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.005 14:40:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@21 
-- # NET_TYPE=phy-fallback 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:54.006 14:40:27 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.006 14:40:27 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.006 14:40:27 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.006 14:40:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.006 14:40:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.006 14:40:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.006 14:40:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:54.006 14:40:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@47 -- # : 0 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.006 14:40:27 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:54.006 14:40:27 json_config -- 
json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:54.006 INFO: JSON configuration test init 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.006 14:40:27 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:54.006 14:40:27 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.006 14:40:27 json_config -- json_config/common.sh@10 -- # shift 00:05:54.006 14:40:27 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.006 14:40:27 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.006 14:40:27 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.006 14:40:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.006 14:40:27 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.006 14:40:27 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2675788 00:05:54.006 14:40:27 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.006 Waiting for target to run... 
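A note on the launch traced above: the target is started with --wait-for-rpc so that nothing initializes until configuration arrives over the RPC socket. The snippet below is a minimal stand-alone reproduction run from the SPDK repo root; the polling loop is only a simplified stand-in for the waitforlisten helper and assumes rpc_get_methods is an acceptable liveness probe.

    # Start the target pinned to core 0 with 1024 MB of memory, an explicit RPC
    # socket, and initialization paused until configuration arrives over RPC.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!

    # Simplified stand-in for waitforlisten: poll the socket until it answers.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done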
00:05:54.006 14:40:27 json_config -- json_config/common.sh@25 -- # waitforlisten 2675788 /var/tmp/spdk_tgt.sock 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@829 -- # '[' -z 2675788 ']' 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.006 14:40:27 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.006 14:40:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.006 [2024-07-15 14:40:27.825307] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:05:54.006 [2024-07-15 14:40:27.825352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2675788 ] 00:05:54.006 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.263 [2024-07-15 14:40:28.093036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.263 [2024-07-15 14:40:28.163932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.827 14:40:28 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.827 14:40:28 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:54.827 14:40:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.827 00:05:54.827 14:40:28 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:54.827 14:40:28 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:54.827 14:40:28 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.827 14:40:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.827 14:40:28 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:54.827 14:40:28 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:54.827 14:40:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:54.827 14:40:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.827 14:40:28 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:54.827 14:40:28 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:54.827 14:40:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:58.101 14:40:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.101 14:40:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.101 14:40:31 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:58.101 14:40:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:58.101 14:40:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.101 14:40:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:58.101 14:40:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:58.101 14:40:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:58.101 14:40:31 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:58.101 14:40:31 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:58.101 14:40:31 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.101 14:40:31 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:58.102 14:40:31 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:58.102 14:40:31 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:58.102 14:40:31 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.102 14:40:31 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:58.102 14:40:31 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.102 14:40:31 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:58.102 14:40:31 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:58.102 14:40:31 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:58.102 14:40:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@289 -- 
# local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@296 -- # e810=() 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@297 -- # x722=() 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@298 -- # mlx=() 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:06:03.345 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:03.345 14:40:37 json_config -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:06:03.345 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:06:03.345 Found net devices under 0000:da:00.0: mlx_0_0 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:06:03.345 Found net devices under 0000:da:00.1: mlx_0_1 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@58 -- # uname 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:03.345 14:40:37 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:03.346 14:40:37 json_config -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:03.346 14:40:37 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:03.602 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:03.602 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:06:03.602 altname enp218s0f0np0 00:06:03.602 altname ens818f0np0 00:06:03.602 inet 192.168.100.8/24 scope global mlx_0_0 00:06:03.602 valid_lft forever preferred_lft forever 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:03.602 14:40:37 
json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:03.602 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:03.602 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:06:03.602 altname enp218s0f1np1 00:06:03.602 altname ens818f1np1 00:06:03.602 inet 192.168.100.9/24 scope global mlx_0_1 00:06:03.602 valid_lft forever preferred_lft forever 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@422 -- # return 0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:06:03.602 192.168.100.9' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:03.602 192.168.100.9' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@457 -- # head -n 1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:03.602 192.168.100.9' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@458 -- # head -n 1 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:03.602 14:40:37 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:03.602 14:40:37 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:06:03.602 14:40:37 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.602 14:40:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:03.858 MallocForNvmf0 00:06:03.858 14:40:37 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.858 14:40:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:03.858 MallocForNvmf1 00:06:03.858 14:40:37 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:03.858 14:40:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:04.114 [2024-07-15 14:40:37.906180] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:04.114 [2024-07-15 14:40:37.938633] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x239fb90/0x24ccd00) succeed. 00:06:04.114 [2024-07-15 14:40:37.951096] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23a1d80/0x23acbc0) succeed. 
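The interface probing traced above condenses to very little shell. The sketch below restates it under the assumption that the box exposes the same mlx_0_0/mlx_0_1 netdevs reported here; the module list mirrors the modprobe calls in the trace.

    # RDMA/NVMe-oF kernel modules loaded by the common helpers above.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        sudo modprobe "$m"
    done

    # First IPv4 address of each Mellanox netdev, used for the listener below.
    first_ip=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8
    second_ip=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9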
00:06:04.114 14:40:37 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.114 14:40:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.371 14:40:38 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.371 14:40:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:04.628 14:40:38 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.628 14:40:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:04.628 14:40:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:04.628 14:40:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:04.884 [2024-07-15 14:40:38.639879] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:04.884 14:40:38 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:04.884 14:40:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.884 14:40:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.884 14:40:38 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:04.884 14:40:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:04.884 14:40:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.884 14:40:38 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:04.884 14:40:38 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:04.884 14:40:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:05.140 MallocBdevForConfigChangeCheck 00:06:05.140 14:40:38 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:05.140 14:40:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.140 14:40:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.140 14:40:38 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:05.140 14:40:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.396 14:40:39 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:05.396 INFO: shutting down applications... 
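The "shutting down applications" line marks the end of the provisioning phase. For readability, the RPC sequence traced above is collected here as one sketch, run from the repo root against the same socket; sizes and names are copied verbatim from the trace, and the target bumps the in-capsule data size from 0 to the 256-byte minimum as the earlier warning notes.

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

    # Backing malloc bdevs: 8 MB with 512 B blocks and 4 MB with 1024 B blocks.
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1

    # RDMA transport with an 8192-byte IO unit size and in-capsule data disabled.
    rpc nvmf_create_transport -t rdma -u 8192 -c 0

    # Subsystem, namespaces, and an RDMA listener on the first discovered address.
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Sentinel bdev for the change-detection step, then snapshot the config.
    rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    rpc save_config > spdk_tgt_config.json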
00:06:05.396 14:40:39 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:05.396 14:40:39 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:05.396 14:40:39 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:05.396 14:40:39 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.913 Calling clear_iscsi_subsystem 00:06:07.913 Calling clear_nvmf_subsystem 00:06:07.913 Calling clear_nbd_subsystem 00:06:07.913 Calling clear_ublk_subsystem 00:06:07.913 Calling clear_vhost_blk_subsystem 00:06:07.913 Calling clear_vhost_scsi_subsystem 00:06:07.913 Calling clear_bdev_subsystem 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@345 -- # break 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:07.913 14:40:41 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:07.913 14:40:41 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.913 14:40:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.913 14:40:41 json_config -- json_config/common.sh@35 -- # [[ -n 2675788 ]] 00:06:07.913 14:40:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2675788 00:06:07.913 14:40:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.913 14:40:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.913 14:40:41 json_config -- json_config/common.sh@41 -- # kill -0 2675788 00:06:07.913 14:40:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.478 14:40:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.478 14:40:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.478 14:40:42 json_config -- json_config/common.sh@41 -- # kill -0 2675788 00:06:08.478 14:40:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.478 14:40:42 json_config -- json_config/common.sh@43 -- # break 00:06:08.478 14:40:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.478 14:40:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.478 SPDK target shutdown done 00:06:08.478 14:40:42 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:08.478 INFO: relaunching applications... 
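For the record, the teardown that just completed reduces to two steps: ask the target to clear every configured subsystem, then send SIGINT and wait up to thirty half-second intervals for the process to exit, exactly as the trace's kill -0 loop does.

    # Tear down everything the target was configured with.
    ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config

    # Stop the target and give it roughly 15 seconds to exit cleanly.
    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$tgt_pid" 2>/dev/null || break
        sleep 0.5
    done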
00:06:08.478 14:40:42 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.478 14:40:42 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.478 14:40:42 json_config -- json_config/common.sh@10 -- # shift 00:06:08.478 14:40:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.478 14:40:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.478 14:40:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.478 14:40:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.478 14:40:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.478 14:40:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2680512 00:06:08.478 14:40:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.478 Waiting for target to run... 00:06:08.478 14:40:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.478 14:40:42 json_config -- json_config/common.sh@25 -- # waitforlisten 2680512 /var/tmp/spdk_tgt.sock 00:06:08.478 14:40:42 json_config -- common/autotest_common.sh@829 -- # '[' -z 2680512 ']' 00:06:08.478 14:40:42 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.478 14:40:42 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.478 14:40:42 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.478 14:40:42 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.478 14:40:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.478 [2024-07-15 14:40:42.259289] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:08.478 [2024-07-15 14:40:42.259345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680512 ] 00:06:08.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.043 [2024-07-15 14:40:42.697751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.043 [2024-07-15 14:40:42.789134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.318 [2024-07-15 14:40:45.830956] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xefb9a0/0xf28280) succeed. 00:06:12.318 [2024-07-15 14:40:45.841657] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xefdb90/0xf88260) succeed. 
00:06:12.318 [2024-07-15 14:40:45.890426] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:12.576 14:40:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.576 14:40:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:12.576 14:40:46 json_config -- json_config/common.sh@26 -- # echo '' 00:06:12.576 00:06:12.576 14:40:46 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:12.576 14:40:46 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:12.576 INFO: Checking if target configuration is the same... 00:06:12.576 14:40:46 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:12.576 14:40:46 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.576 14:40:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:12.576 + '[' 2 -ne 2 ']' 00:06:12.576 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:12.576 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:12.576 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:12.576 +++ basename /dev/fd/62 00:06:12.576 ++ mktemp /tmp/62.XXX 00:06:12.576 + tmp_file_1=/tmp/62.JEr 00:06:12.576 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:12.576 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:12.576 + tmp_file_2=/tmp/spdk_tgt_config.json.EJN 00:06:12.576 + ret=0 00:06:12.576 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.833 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:12.833 + diff -u /tmp/62.JEr /tmp/spdk_tgt_config.json.EJN 00:06:12.833 + echo 'INFO: JSON config files are the same' 00:06:12.833 INFO: JSON config files are the same 00:06:12.833 + rm /tmp/62.JEr /tmp/spdk_tgt_config.json.EJN 00:06:12.833 + exit 0 00:06:12.833 14:40:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:12.833 14:40:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:12.833 INFO: changing configuration and checking if this can be detected... 
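The equality check that just passed is a normalized diff: the saved file and the live configuration are both run through config_filter.py's sort method and compared with diff -u. A condensed restatement, assuming config_filter.py reads the document on stdin as the trace suggests:

    live=$(mktemp)
    saved=$(mktemp)
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > "$live"
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
    diff -u "$saved" "$live" && echo 'INFO: JSON config files are the same'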
00:06:12.833 14:40:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:12.834 14:40:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.091 14:40:46 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.091 14:40:46 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:13.091 14:40:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.091 + '[' 2 -ne 2 ']' 00:06:13.091 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:13.091 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:13.091 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:13.091 +++ basename /dev/fd/62 00:06:13.091 ++ mktemp /tmp/62.XXX 00:06:13.091 + tmp_file_1=/tmp/62.8DV 00:06:13.091 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:13.091 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:13.091 + tmp_file_2=/tmp/spdk_tgt_config.json.jLg 00:06:13.091 + ret=0 00:06:13.091 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.348 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:13.606 + diff -u /tmp/62.8DV /tmp/spdk_tgt_config.json.jLg 00:06:13.606 + ret=1 00:06:13.606 + echo '=== Start of file: /tmp/62.8DV ===' 00:06:13.606 + cat /tmp/62.8DV 00:06:13.606 + echo '=== End of file: /tmp/62.8DV ===' 00:06:13.606 + echo '' 00:06:13.606 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jLg ===' 00:06:13.606 + cat /tmp/spdk_tgt_config.json.jLg 00:06:13.606 + echo '=== End of file: /tmp/spdk_tgt_config.json.jLg ===' 00:06:13.607 + echo '' 00:06:13.607 + rm /tmp/62.8DV /tmp/spdk_tgt_config.json.jLg 00:06:13.607 + exit 1 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:13.607 INFO: configuration change detected. 
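The negative case is the same comparison after deleting the sentinel bdev, at which point the normalized diff is expected to fail (reusing $saved from the previous sketch):

    # Drop the sentinel so the running config no longer matches the saved file.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

    # The same normalized diff should now return non-zero.
    diff -u "$saved" <(./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort) \
        || echo 'INFO: configuration change detected.'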
00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@317 -- # [[ -n 2680512 ]] 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.607 14:40:47 json_config -- json_config/json_config.sh@323 -- # killprocess 2680512 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 2680512 ']' 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@952 -- # kill -0 2680512 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@953 -- # uname 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2680512 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2680512' 00:06:13.607 killing process with pid 2680512 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@967 -- # kill 2680512 00:06:13.607 14:40:47 json_config -- common/autotest_common.sh@972 -- # wait 2680512 00:06:16.131 14:40:49 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:16.131 14:40:49 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:16.131 14:40:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.131 14:40:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.131 14:40:49 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:16.131 14:40:49 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:16.131 INFO: Success 00:06:16.131 14:40:49 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:16.131 14:40:49 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:16.131 14:40:49 json_config -- nvmf/common.sh@117 -- # sync 00:06:16.131 14:40:49 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:16.131 14:40:49 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:16.131 14:40:49 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:16.131 14:40:49 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:16.131 14:40:49 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:16.131 00:06:16.131 real 0m21.908s 00:06:16.131 user 0m24.140s 00:06:16.131 sys 0m5.967s 00:06:16.131 14:40:49 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.131 14:40:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.131 ************************************ 00:06:16.131 END TEST json_config 00:06:16.131 ************************************ 00:06:16.131 14:40:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.131 14:40:49 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:16.131 14:40:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.131 14:40:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.131 14:40:49 -- common/autotest_common.sh@10 -- # set +x 00:06:16.131 ************************************ 00:06:16.131 START TEST json_config_extra_key 00:06:16.131 ************************************ 00:06:16.131 14:40:49 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:16.131 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.131 14:40:49 
json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:16.131 14:40:49 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.131 14:40:49 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.131 14:40:49 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.131 14:40:49 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.131 14:40:49 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.131 14:40:49 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.131 14:40:49 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:16.131 14:40:49 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.131 14:40:49 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.131 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:16.131 14:40:49 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:16.131 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:16.131 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:16.131 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:16.131 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:16.131 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:16.132 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:16.132 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:16.132 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.132 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:16.132 INFO: launching applications... 00:06:16.132 14:40:49 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2681923 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.132 Waiting for target to run... 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2681923 /var/tmp/spdk_tgt.sock 00:06:16.132 14:40:49 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2681923 ']' 00:06:16.132 14:40:49 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:16.132 14:40:49 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.132 14:40:49 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.132 14:40:49 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
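The json_config_extra_key trace above starts spdk_tgt with '-m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json' and then sits in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, reusing the binary and script paths shown in this workspace; the polling loop is illustrative and is not the harness's own waitforlisten helper:

  # Sketch only: start spdk_tgt on a private RPC socket with a JSON config,
  # then poll until an RPC on that socket succeeds (paths taken from the trace above).
  SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  "$SPDK_BIN" -m 0x1 -s 1024 -r "$SOCK" \
      --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json &
  app_pid=$!
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods is a cheap probe; it fails until the target is listening.
      "$RPC_PY" -s "$SOCK" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done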
00:06:16.132 14:40:49 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.132 14:40:49 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:16.132 [2024-07-15 14:40:49.792802] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:16.132 [2024-07-15 14:40:49.792854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681923 ] 00:06:16.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.389 [2024-07-15 14:40:50.064906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.389 [2024-07-15 14:40:50.143309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.954 14:40:50 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.954 14:40:50 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:16.954 00:06:16.954 14:40:50 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:16.954 INFO: shutting down applications... 00:06:16.954 14:40:50 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2681923 ]] 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2681923 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2681923 00:06:16.954 14:40:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.211 14:40:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.211 14:40:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.211 14:40:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2681923 00:06:17.211 14:40:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.211 14:40:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:17.211 14:40:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.211 14:40:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.211 SPDK target shutdown done 00:06:17.211 14:40:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:17.211 Success 00:06:17.211 00:06:17.211 real 0m1.446s 00:06:17.211 user 0m1.233s 00:06:17.211 sys 0m0.362s 00:06:17.211 14:40:51 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.211 14:40:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.211 ************************************ 00:06:17.211 END TEST json_config_extra_key 00:06:17.211 ************************************ 00:06:17.211 14:40:51 -- common/autotest_common.sh@1142 -- # return 0 00:06:17.211 14:40:51 -- 
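The teardown traced above (json_config/common.sh) sends SIGINT to the target, then polls 'kill -0' up to 30 times with a 0.5 s sleep before printing 'SPDK target shutdown done'. A self-contained sketch of that SIGINT-then-poll pattern; the function name and failure message are illustrative, not the harness's own:

  # Sketch of the graceful-shutdown loop seen in the json_config traces above.
  shutdown_app() {
      local pid=$1
      kill -SIGINT "$pid" 2>/dev/null || return 0          # process already gone
      for ((i = 0; i < 30; i++)); do
          # kill -0 only tests that the pid still exists; it delivers no signal.
          kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
          sleep 0.5
      done
      echo "process $pid still running after ~15 s" >&2
      return 1
  }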
spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.211 14:40:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.211 14:40:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.211 14:40:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.469 ************************************ 00:06:17.469 START TEST alias_rpc 00:06:17.469 ************************************ 00:06:17.469 14:40:51 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.469 * Looking for test storage... 00:06:17.469 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:17.469 14:40:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.469 14:40:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:17.469 14:40:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2682272 00:06:17.469 14:40:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2682272 00:06:17.469 14:40:51 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2682272 ']' 00:06:17.469 14:40:51 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.469 14:40:51 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.469 14:40:51 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.469 14:40:51 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.469 14:40:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.469 [2024-07-15 14:40:51.279377] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:06:17.469 [2024-07-15 14:40:51.279431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682272 ] 00:06:17.469 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.469 [2024-07-15 14:40:51.328619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.726 [2024-07-15 14:40:51.411214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.291 14:40:52 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.291 14:40:52 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:18.291 14:40:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:18.561 14:40:52 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2682272 00:06:18.561 14:40:52 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2682272 ']' 00:06:18.561 14:40:52 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2682272 00:06:18.561 14:40:52 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:18.561 14:40:52 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.561 14:40:52 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2682272 00:06:18.561 14:40:52 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.562 14:40:52 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.562 14:40:52 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2682272' 00:06:18.562 killing process with pid 2682272 00:06:18.562 14:40:52 alias_rpc -- common/autotest_common.sh@967 -- # kill 2682272 00:06:18.562 14:40:52 alias_rpc -- common/autotest_common.sh@972 -- # wait 2682272 00:06:18.820 00:06:18.820 real 0m1.463s 00:06:18.820 user 0m1.616s 00:06:18.820 sys 0m0.372s 00:06:18.820 14:40:52 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.820 14:40:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.820 ************************************ 00:06:18.820 END TEST alias_rpc 00:06:18.820 ************************************ 00:06:18.820 14:40:52 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.820 14:40:52 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:18.820 14:40:52 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:18.820 14:40:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.820 14:40:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.820 14:40:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.820 ************************************ 00:06:18.820 START TEST spdkcli_tcp 00:06:18.820 ************************************ 00:06:18.820 14:40:52 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:19.077 * Looking for test storage... 
00:06:19.077 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:19.077 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:19.077 14:40:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:19.077 14:40:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:19.078 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:19.078 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:19.078 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:19.078 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.078 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2682575 00:06:19.078 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2682575 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2682575 ']' 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.078 14:40:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.078 14:40:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.078 [2024-07-15 14:40:52.829895] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
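Just below, the spdkcli_tcp test exercises the RPC server over TCP: it bridges the address and port set above (127.0.0.1:9998) to the target's UNIX socket with socat and then runs rpc.py against the TCP endpoint, which returns the long rpc_get_methods list. A minimal sketch of the same bridge, with the retry and timeout flags copied from the trace:

  # Sketch: expose the SPDK UNIX RPC socket on TCP 127.0.0.1:9998 and query it.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r 100 / -t 2 are the connection-retry and timeout values used in the trace below.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"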
00:06:19.078 [2024-07-15 14:40:52.829936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682575 ] 00:06:19.078 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.078 [2024-07-15 14:40:52.883446] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.078 [2024-07-15 14:40:52.963862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.078 [2024-07-15 14:40:52.963866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.008 14:40:53 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.008 14:40:53 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:20.008 14:40:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2682595 00:06:20.008 14:40:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:20.008 14:40:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:20.008 [ 00:06:20.008 "bdev_malloc_delete", 00:06:20.008 "bdev_malloc_create", 00:06:20.008 "bdev_null_resize", 00:06:20.008 "bdev_null_delete", 00:06:20.008 "bdev_null_create", 00:06:20.008 "bdev_nvme_cuse_unregister", 00:06:20.008 "bdev_nvme_cuse_register", 00:06:20.008 "bdev_opal_new_user", 00:06:20.008 "bdev_opal_set_lock_state", 00:06:20.008 "bdev_opal_delete", 00:06:20.008 "bdev_opal_get_info", 00:06:20.008 "bdev_opal_create", 00:06:20.008 "bdev_nvme_opal_revert", 00:06:20.008 "bdev_nvme_opal_init", 00:06:20.008 "bdev_nvme_send_cmd", 00:06:20.008 "bdev_nvme_get_path_iostat", 00:06:20.008 "bdev_nvme_get_mdns_discovery_info", 00:06:20.008 "bdev_nvme_stop_mdns_discovery", 00:06:20.008 "bdev_nvme_start_mdns_discovery", 00:06:20.008 "bdev_nvme_set_multipath_policy", 00:06:20.008 "bdev_nvme_set_preferred_path", 00:06:20.008 "bdev_nvme_get_io_paths", 00:06:20.008 "bdev_nvme_remove_error_injection", 00:06:20.008 "bdev_nvme_add_error_injection", 00:06:20.009 "bdev_nvme_get_discovery_info", 00:06:20.009 "bdev_nvme_stop_discovery", 00:06:20.009 "bdev_nvme_start_discovery", 00:06:20.009 "bdev_nvme_get_controller_health_info", 00:06:20.009 "bdev_nvme_disable_controller", 00:06:20.009 "bdev_nvme_enable_controller", 00:06:20.009 "bdev_nvme_reset_controller", 00:06:20.009 "bdev_nvme_get_transport_statistics", 00:06:20.009 "bdev_nvme_apply_firmware", 00:06:20.009 "bdev_nvme_detach_controller", 00:06:20.009 "bdev_nvme_get_controllers", 00:06:20.009 "bdev_nvme_attach_controller", 00:06:20.009 "bdev_nvme_set_hotplug", 00:06:20.009 "bdev_nvme_set_options", 00:06:20.009 "bdev_passthru_delete", 00:06:20.009 "bdev_passthru_create", 00:06:20.009 "bdev_lvol_set_parent_bdev", 00:06:20.009 "bdev_lvol_set_parent", 00:06:20.009 "bdev_lvol_check_shallow_copy", 00:06:20.009 "bdev_lvol_start_shallow_copy", 00:06:20.009 "bdev_lvol_grow_lvstore", 00:06:20.009 "bdev_lvol_get_lvols", 00:06:20.009 "bdev_lvol_get_lvstores", 00:06:20.009 "bdev_lvol_delete", 00:06:20.009 "bdev_lvol_set_read_only", 00:06:20.009 "bdev_lvol_resize", 00:06:20.009 "bdev_lvol_decouple_parent", 00:06:20.009 "bdev_lvol_inflate", 00:06:20.009 "bdev_lvol_rename", 00:06:20.009 "bdev_lvol_clone_bdev", 00:06:20.009 "bdev_lvol_clone", 00:06:20.009 "bdev_lvol_snapshot", 00:06:20.009 "bdev_lvol_create", 00:06:20.009 "bdev_lvol_delete_lvstore", 00:06:20.009 
"bdev_lvol_rename_lvstore", 00:06:20.009 "bdev_lvol_create_lvstore", 00:06:20.009 "bdev_raid_set_options", 00:06:20.009 "bdev_raid_remove_base_bdev", 00:06:20.009 "bdev_raid_add_base_bdev", 00:06:20.009 "bdev_raid_delete", 00:06:20.009 "bdev_raid_create", 00:06:20.009 "bdev_raid_get_bdevs", 00:06:20.009 "bdev_error_inject_error", 00:06:20.009 "bdev_error_delete", 00:06:20.009 "bdev_error_create", 00:06:20.009 "bdev_split_delete", 00:06:20.009 "bdev_split_create", 00:06:20.009 "bdev_delay_delete", 00:06:20.009 "bdev_delay_create", 00:06:20.009 "bdev_delay_update_latency", 00:06:20.009 "bdev_zone_block_delete", 00:06:20.009 "bdev_zone_block_create", 00:06:20.009 "blobfs_create", 00:06:20.009 "blobfs_detect", 00:06:20.009 "blobfs_set_cache_size", 00:06:20.009 "bdev_aio_delete", 00:06:20.009 "bdev_aio_rescan", 00:06:20.009 "bdev_aio_create", 00:06:20.009 "bdev_ftl_set_property", 00:06:20.009 "bdev_ftl_get_properties", 00:06:20.009 "bdev_ftl_get_stats", 00:06:20.009 "bdev_ftl_unmap", 00:06:20.009 "bdev_ftl_unload", 00:06:20.009 "bdev_ftl_delete", 00:06:20.009 "bdev_ftl_load", 00:06:20.009 "bdev_ftl_create", 00:06:20.009 "bdev_virtio_attach_controller", 00:06:20.009 "bdev_virtio_scsi_get_devices", 00:06:20.009 "bdev_virtio_detach_controller", 00:06:20.009 "bdev_virtio_blk_set_hotplug", 00:06:20.009 "bdev_iscsi_delete", 00:06:20.009 "bdev_iscsi_create", 00:06:20.009 "bdev_iscsi_set_options", 00:06:20.009 "accel_error_inject_error", 00:06:20.009 "ioat_scan_accel_module", 00:06:20.009 "dsa_scan_accel_module", 00:06:20.009 "iaa_scan_accel_module", 00:06:20.009 "keyring_file_remove_key", 00:06:20.009 "keyring_file_add_key", 00:06:20.009 "keyring_linux_set_options", 00:06:20.009 "iscsi_get_histogram", 00:06:20.009 "iscsi_enable_histogram", 00:06:20.009 "iscsi_set_options", 00:06:20.009 "iscsi_get_auth_groups", 00:06:20.009 "iscsi_auth_group_remove_secret", 00:06:20.009 "iscsi_auth_group_add_secret", 00:06:20.009 "iscsi_delete_auth_group", 00:06:20.009 "iscsi_create_auth_group", 00:06:20.009 "iscsi_set_discovery_auth", 00:06:20.009 "iscsi_get_options", 00:06:20.009 "iscsi_target_node_request_logout", 00:06:20.009 "iscsi_target_node_set_redirect", 00:06:20.009 "iscsi_target_node_set_auth", 00:06:20.009 "iscsi_target_node_add_lun", 00:06:20.009 "iscsi_get_stats", 00:06:20.009 "iscsi_get_connections", 00:06:20.009 "iscsi_portal_group_set_auth", 00:06:20.009 "iscsi_start_portal_group", 00:06:20.009 "iscsi_delete_portal_group", 00:06:20.009 "iscsi_create_portal_group", 00:06:20.009 "iscsi_get_portal_groups", 00:06:20.009 "iscsi_delete_target_node", 00:06:20.009 "iscsi_target_node_remove_pg_ig_maps", 00:06:20.009 "iscsi_target_node_add_pg_ig_maps", 00:06:20.009 "iscsi_create_target_node", 00:06:20.009 "iscsi_get_target_nodes", 00:06:20.009 "iscsi_delete_initiator_group", 00:06:20.009 "iscsi_initiator_group_remove_initiators", 00:06:20.009 "iscsi_initiator_group_add_initiators", 00:06:20.009 "iscsi_create_initiator_group", 00:06:20.009 "iscsi_get_initiator_groups", 00:06:20.009 "nvmf_set_crdt", 00:06:20.009 "nvmf_set_config", 00:06:20.009 "nvmf_set_max_subsystems", 00:06:20.009 "nvmf_stop_mdns_prr", 00:06:20.009 "nvmf_publish_mdns_prr", 00:06:20.009 "nvmf_subsystem_get_listeners", 00:06:20.009 "nvmf_subsystem_get_qpairs", 00:06:20.009 "nvmf_subsystem_get_controllers", 00:06:20.009 "nvmf_get_stats", 00:06:20.009 "nvmf_get_transports", 00:06:20.009 "nvmf_create_transport", 00:06:20.009 "nvmf_get_targets", 00:06:20.009 "nvmf_delete_target", 00:06:20.009 "nvmf_create_target", 00:06:20.009 
"nvmf_subsystem_allow_any_host", 00:06:20.009 "nvmf_subsystem_remove_host", 00:06:20.009 "nvmf_subsystem_add_host", 00:06:20.009 "nvmf_ns_remove_host", 00:06:20.009 "nvmf_ns_add_host", 00:06:20.009 "nvmf_subsystem_remove_ns", 00:06:20.009 "nvmf_subsystem_add_ns", 00:06:20.009 "nvmf_subsystem_listener_set_ana_state", 00:06:20.009 "nvmf_discovery_get_referrals", 00:06:20.009 "nvmf_discovery_remove_referral", 00:06:20.009 "nvmf_discovery_add_referral", 00:06:20.009 "nvmf_subsystem_remove_listener", 00:06:20.009 "nvmf_subsystem_add_listener", 00:06:20.009 "nvmf_delete_subsystem", 00:06:20.009 "nvmf_create_subsystem", 00:06:20.009 "nvmf_get_subsystems", 00:06:20.009 "env_dpdk_get_mem_stats", 00:06:20.009 "nbd_get_disks", 00:06:20.009 "nbd_stop_disk", 00:06:20.009 "nbd_start_disk", 00:06:20.009 "ublk_recover_disk", 00:06:20.009 "ublk_get_disks", 00:06:20.009 "ublk_stop_disk", 00:06:20.009 "ublk_start_disk", 00:06:20.009 "ublk_destroy_target", 00:06:20.009 "ublk_create_target", 00:06:20.009 "virtio_blk_create_transport", 00:06:20.009 "virtio_blk_get_transports", 00:06:20.009 "vhost_controller_set_coalescing", 00:06:20.009 "vhost_get_controllers", 00:06:20.009 "vhost_delete_controller", 00:06:20.009 "vhost_create_blk_controller", 00:06:20.009 "vhost_scsi_controller_remove_target", 00:06:20.009 "vhost_scsi_controller_add_target", 00:06:20.009 "vhost_start_scsi_controller", 00:06:20.009 "vhost_create_scsi_controller", 00:06:20.009 "thread_set_cpumask", 00:06:20.009 "framework_get_governor", 00:06:20.009 "framework_get_scheduler", 00:06:20.009 "framework_set_scheduler", 00:06:20.009 "framework_get_reactors", 00:06:20.009 "thread_get_io_channels", 00:06:20.009 "thread_get_pollers", 00:06:20.009 "thread_get_stats", 00:06:20.009 "framework_monitor_context_switch", 00:06:20.009 "spdk_kill_instance", 00:06:20.009 "log_enable_timestamps", 00:06:20.009 "log_get_flags", 00:06:20.009 "log_clear_flag", 00:06:20.009 "log_set_flag", 00:06:20.009 "log_get_level", 00:06:20.009 "log_set_level", 00:06:20.009 "log_get_print_level", 00:06:20.009 "log_set_print_level", 00:06:20.009 "framework_enable_cpumask_locks", 00:06:20.009 "framework_disable_cpumask_locks", 00:06:20.009 "framework_wait_init", 00:06:20.009 "framework_start_init", 00:06:20.009 "scsi_get_devices", 00:06:20.009 "bdev_get_histogram", 00:06:20.009 "bdev_enable_histogram", 00:06:20.009 "bdev_set_qos_limit", 00:06:20.009 "bdev_set_qd_sampling_period", 00:06:20.009 "bdev_get_bdevs", 00:06:20.009 "bdev_reset_iostat", 00:06:20.009 "bdev_get_iostat", 00:06:20.009 "bdev_examine", 00:06:20.009 "bdev_wait_for_examine", 00:06:20.009 "bdev_set_options", 00:06:20.009 "notify_get_notifications", 00:06:20.009 "notify_get_types", 00:06:20.009 "accel_get_stats", 00:06:20.009 "accel_set_options", 00:06:20.009 "accel_set_driver", 00:06:20.009 "accel_crypto_key_destroy", 00:06:20.009 "accel_crypto_keys_get", 00:06:20.009 "accel_crypto_key_create", 00:06:20.009 "accel_assign_opc", 00:06:20.009 "accel_get_module_info", 00:06:20.009 "accel_get_opc_assignments", 00:06:20.009 "vmd_rescan", 00:06:20.009 "vmd_remove_device", 00:06:20.009 "vmd_enable", 00:06:20.009 "sock_get_default_impl", 00:06:20.009 "sock_set_default_impl", 00:06:20.009 "sock_impl_set_options", 00:06:20.009 "sock_impl_get_options", 00:06:20.009 "iobuf_get_stats", 00:06:20.009 "iobuf_set_options", 00:06:20.009 "framework_get_pci_devices", 00:06:20.009 "framework_get_config", 00:06:20.009 "framework_get_subsystems", 00:06:20.009 "trace_get_info", 00:06:20.009 "trace_get_tpoint_group_mask", 00:06:20.009 
"trace_disable_tpoint_group", 00:06:20.009 "trace_enable_tpoint_group", 00:06:20.009 "trace_clear_tpoint_mask", 00:06:20.009 "trace_set_tpoint_mask", 00:06:20.009 "keyring_get_keys", 00:06:20.009 "spdk_get_version", 00:06:20.009 "rpc_get_methods" 00:06:20.009 ] 00:06:20.009 14:40:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.009 14:40:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:20.009 14:40:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2682575 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2682575 ']' 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2682575 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2682575 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.009 14:40:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.010 14:40:53 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2682575' 00:06:20.010 killing process with pid 2682575 00:06:20.010 14:40:53 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2682575 00:06:20.010 14:40:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2682575 00:06:20.574 00:06:20.574 real 0m1.496s 00:06:20.574 user 0m2.791s 00:06:20.574 sys 0m0.422s 00:06:20.574 14:40:54 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.574 14:40:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.574 ************************************ 00:06:20.574 END TEST spdkcli_tcp 00:06:20.574 ************************************ 00:06:20.574 14:40:54 -- common/autotest_common.sh@1142 -- # return 0 00:06:20.574 14:40:54 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.574 14:40:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.574 14:40:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.574 14:40:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.574 ************************************ 00:06:20.574 START TEST dpdk_mem_utility 00:06:20.574 ************************************ 00:06:20.574 14:40:54 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.574 * Looking for test storage... 
00:06:20.574 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:20.574 14:40:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:20.574 14:40:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2682880 00:06:20.574 14:40:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:20.574 14:40:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2682880 00:06:20.574 14:40:54 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2682880 ']' 00:06:20.574 14:40:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.574 14:40:54 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.574 14:40:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.574 14:40:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.574 14:40:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:20.574 [2024-07-15 14:40:54.392794] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:20.574 [2024-07-15 14:40:54.392850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682880 ] 00:06:20.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.574 [2024-07-15 14:40:54.447373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.830 [2024-07-15 14:40:54.527656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.394 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.394 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:21.394 14:40:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:21.394 14:40:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:21.394 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.394 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.394 { 00:06:21.394 "filename": "/tmp/spdk_mem_dump.txt" 00:06:21.394 } 00:06:21.394 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.394 14:40:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:21.394 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:21.394 1 heaps totaling size 814.000000 MiB 00:06:21.394 size: 814.000000 MiB heap id: 0 00:06:21.394 end heaps---------- 00:06:21.394 8 mempools totaling size 598.116089 MiB 00:06:21.394 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:21.394 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:21.394 size: 84.521057 MiB name: bdev_io_2682880 00:06:21.394 size: 51.011292 MiB name: evtpool_2682880 00:06:21.394 size: 50.003479 MiB 
name: msgpool_2682880 00:06:21.394 size: 21.763794 MiB name: PDU_Pool 00:06:21.394 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:21.394 size: 0.026123 MiB name: Session_Pool 00:06:21.394 end mempools------- 00:06:21.395 6 memzones totaling size 4.142822 MiB 00:06:21.395 size: 1.000366 MiB name: RG_ring_0_2682880 00:06:21.395 size: 1.000366 MiB name: RG_ring_1_2682880 00:06:21.395 size: 1.000366 MiB name: RG_ring_4_2682880 00:06:21.395 size: 1.000366 MiB name: RG_ring_5_2682880 00:06:21.395 size: 0.125366 MiB name: RG_ring_2_2682880 00:06:21.395 size: 0.015991 MiB name: RG_ring_3_2682880 00:06:21.395 end memzones------- 00:06:21.395 14:40:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:21.395 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:21.395 list of free elements. size: 12.519348 MiB 00:06:21.395 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:21.395 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:21.395 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:21.395 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:21.395 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:21.395 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:21.395 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:21.395 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:21.395 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:21.395 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:21.395 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:21.395 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:21.395 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:21.395 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:21.395 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:21.395 list of standard malloc elements. 
size: 199.218079 MiB 00:06:21.395 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:21.395 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:21.395 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:21.395 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:21.395 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:21.395 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:21.395 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:21.395 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:21.395 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:21.395 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:21.395 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:21.395 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:21.395 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:21.395 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:21.395 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:21.395 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:21.395 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:21.395 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:21.395 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:21.395 list of memzone associated elements. 
size: 602.262573 MiB 00:06:21.395 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:21.395 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:21.395 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:21.395 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:21.395 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:21.395 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2682880_0 00:06:21.395 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:21.395 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2682880_0 00:06:21.395 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:21.395 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2682880_0 00:06:21.395 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:21.395 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:21.395 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:21.395 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:21.395 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:21.395 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2682880 00:06:21.395 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:21.395 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2682880 00:06:21.395 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:21.395 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2682880 00:06:21.395 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:21.395 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:21.395 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:21.395 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:21.395 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:21.395 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:21.395 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:21.395 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:21.395 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:21.395 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2682880 00:06:21.395 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:21.395 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2682880 00:06:21.395 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:21.395 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2682880 00:06:21.395 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:21.395 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2682880 00:06:21.395 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:21.395 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2682880 00:06:21.395 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:21.395 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:21.395 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:21.395 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:21.395 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:21.395 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:21.395 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:21.395 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2682880 00:06:21.395 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:21.395 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:21.395 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:21.395 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:21.395 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:21.395 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2682880 00:06:21.395 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:21.395 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:21.395 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:21.395 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2682880 00:06:21.395 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:21.395 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2682880 00:06:21.395 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:21.395 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:21.395 14:40:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:21.395 14:40:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2682880 00:06:21.395 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2682880 ']' 00:06:21.395 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2682880 00:06:21.395 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:21.395 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.395 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2682880 00:06:21.654 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.654 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.654 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2682880' 00:06:21.654 killing process with pid 2682880 00:06:21.654 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2682880 00:06:21.654 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2682880 00:06:21.912 00:06:21.912 real 0m1.369s 00:06:21.912 user 0m1.433s 00:06:21.912 sys 0m0.387s 00:06:21.912 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.912 14:40:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.912 ************************************ 00:06:21.912 END TEST dpdk_mem_utility 00:06:21.912 ************************************ 00:06:21.912 14:40:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.912 14:40:55 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:21.912 14:40:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.912 14:40:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.912 14:40:55 -- common/autotest_common.sh@10 -- # set +x 00:06:21.912 ************************************ 00:06:21.912 START TEST event 00:06:21.912 ************************************ 00:06:21.912 14:40:55 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:21.912 * Looking for test storage... 
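The dpdk_mem_utility output above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the running target write /tmp/spdk_mem_dump.txt (the reply shown above carries that filename), and scripts/dpdk_mem_info.py renders it, first as the heap/mempool/memzone summary and then, invoked with '-m 0' as in the trace, as the detailed element and memzone listing. Reproducing that by hand against a running target would look roughly like this (paths as in this workspace):

  # Sketch: dump DPDK memory stats from a running SPDK app and summarize them.
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  "$SPDK_DIR/scripts/dpdk_mem_info.py"                # heap/mempool/memzone summary
  "$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0           # detailed listing, as invoked in the trace above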
00:06:21.912 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:21.912 14:40:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:21.912 14:40:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.912 14:40:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.912 14:40:55 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:21.912 14:40:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.912 14:40:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.912 ************************************ 00:06:21.912 START TEST event_perf 00:06:21.912 ************************************ 00:06:21.912 14:40:55 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.912 Running I/O for 1 seconds...[2024-07-15 14:40:55.826204] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:21.912 [2024-07-15 14:40:55.826269] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683171 ] 00:06:22.170 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.170 [2024-07-15 14:40:55.884933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.170 [2024-07-15 14:40:55.961134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.170 [2024-07-15 14:40:55.961229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.170 [2024-07-15 14:40:55.961292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.170 [2024-07-15 14:40:55.961293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.541 Running I/O for 1 seconds... 00:06:23.541 lcore 0: 214550 00:06:23.541 lcore 1: 214551 00:06:23.541 lcore 2: 214552 00:06:23.541 lcore 3: 214551 00:06:23.541 done. 00:06:23.541 00:06:23.541 real 0m1.226s 00:06:23.541 user 0m4.140s 00:06:23.541 sys 0m0.082s 00:06:23.541 14:40:57 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.541 14:40:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.541 ************************************ 00:06:23.541 END TEST event_perf 00:06:23.541 ************************************ 00:06:23.541 14:40:57 event -- common/autotest_common.sh@1142 -- # return 0 00:06:23.541 14:40:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:23.541 14:40:57 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:23.541 14:40:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.541 14:40:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.541 ************************************ 00:06:23.541 START TEST event_reactor 00:06:23.541 ************************************ 00:06:23.541 14:40:57 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:23.541 [2024-07-15 14:40:57.120239] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:06:23.541 [2024-07-15 14:40:57.120308] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683421 ] 00:06:23.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.541 [2024-07-15 14:40:57.180668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.541 [2024-07-15 14:40:57.251611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.474 test_start 00:06:24.474 oneshot 00:06:24.474 tick 100 00:06:24.474 tick 100 00:06:24.474 tick 250 00:06:24.474 tick 100 00:06:24.474 tick 100 00:06:24.474 tick 100 00:06:24.474 tick 250 00:06:24.474 tick 500 00:06:24.474 tick 100 00:06:24.474 tick 100 00:06:24.474 tick 250 00:06:24.474 tick 100 00:06:24.474 tick 100 00:06:24.474 test_end 00:06:24.474 00:06:24.474 real 0m1.222s 00:06:24.474 user 0m1.143s 00:06:24.474 sys 0m0.075s 00:06:24.474 14:40:58 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.474 14:40:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:24.474 ************************************ 00:06:24.474 END TEST event_reactor 00:06:24.474 ************************************ 00:06:24.474 14:40:58 event -- common/autotest_common.sh@1142 -- # return 0 00:06:24.474 14:40:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:24.474 14:40:58 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.475 14:40:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.475 14:40:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.475 ************************************ 00:06:24.475 START TEST event_reactor_perf 00:06:24.475 ************************************ 00:06:24.475 14:40:58 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:24.733 [2024-07-15 14:40:58.408142] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:06:24.733 [2024-07-15 14:40:58.408218] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683669 ] 00:06:24.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.733 [2024-07-15 14:40:58.468210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.733 [2024-07-15 14:40:58.538518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.106 test_start 00:06:26.106 test_end 00:06:26.106 Performance: 515079 events per second 00:06:26.106 00:06:26.106 real 0m1.223s 00:06:26.106 user 0m1.144s 00:06:26.106 sys 0m0.075s 00:06:26.106 14:40:59 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.106 14:40:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.106 ************************************ 00:06:26.106 END TEST event_reactor_perf 00:06:26.106 ************************************ 00:06:26.106 14:40:59 event -- common/autotest_common.sh@1142 -- # return 0 00:06:26.106 14:40:59 event -- event/event.sh@49 -- # uname -s 00:06:26.106 14:40:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:26.106 14:40:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:26.106 14:40:59 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.106 14:40:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.106 14:40:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.106 ************************************ 00:06:26.106 START TEST event_scheduler 00:06:26.106 ************************************ 00:06:26.106 14:40:59 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:26.106 * Looking for test storage... 00:06:26.106 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:26.106 14:40:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:26.106 14:40:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2683950 00:06:26.106 14:40:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.106 14:40:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:26.106 14:40:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2683950 00:06:26.106 14:40:59 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2683950 ']' 00:06:26.106 14:40:59 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.107 14:40:59 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.107 14:40:59 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
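The scheduler test app above is launched with --wait-for-rpc, so it pauses before framework initialization until RPCs arrive on the default socket (/var/tmp/spdk.sock); the next stretch of the trace selects the dynamic scheduler and then calls framework_start_init. Doing the same by hand with rpc.py would look roughly like this (both methods appear in the rpc_get_methods list earlier in this log):

  # Sketch: drive an app started with --wait-for-rpc (rpc.py defaults to /var/tmp/spdk.sock).
  RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  "$RPC_PY" framework_set_scheduler dynamic   # must be set before init completes
  "$RPC_PY" framework_start_init              # let the app finish starting up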
00:06:26.107 14:40:59 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.107 14:40:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:26.107 [2024-07-15 14:40:59.808548] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:26.107 [2024-07-15 14:40:59.808587] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683950 ] 00:06:26.107 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.107 [2024-07-15 14:40:59.856765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.107 [2024-07-15 14:40:59.932947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.107 [2024-07-15 14:40:59.933036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.107 [2024-07-15 14:40:59.933121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.107 [2024-07-15 14:40:59.933123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.039 14:41:00 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.039 14:41:00 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:27.039 14:41:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.039 14:41:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.039 14:41:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.039 [2024-07-15 14:41:00.619473] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:27.039 [2024-07-15 14:41:00.619495] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.039 [2024-07-15 14:41:00.619504] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.040 [2024-07-15 14:41:00.619509] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.040 [2024-07-15 14:41:00.619514] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.040 14:41:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.040 14:41:00 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 [2024-07-15 14:41:00.691483] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
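The scheduler_create_thread subtest that follows drives the app through an RPC plugin: rpc_cmd is called with '--plugin scheduler_plugin' to create pinned active and idle threads ('-n <name> -m <cpumask> -a <busy percentage>') and later to change a thread's activity. Roughly the same calls issued directly with rpc.py, assuming the plugin module (part of the test under test/event/scheduler, not a core SPDK script) is importable:

  # Sketch: --plugin loads extra RPC methods; scheduler_plugin is the test's own module,
  # so the scheduler test directory is assumed to be on PYTHONPATH.
  export PYTHONPATH="/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler:${PYTHONPATH:-}"
  RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  "$RPC_PY" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  "$RPC_PY" --plugin scheduler_plugin scheduler_thread_set_active 11 50   # 11 = thread id returned by the create call in the trace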
00:06:27.040 14:41:00 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.040 14:41:00 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.040 14:41:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 ************************************ 00:06:27.040 START TEST scheduler_create_thread 00:06:27.040 ************************************ 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 2 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 3 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 4 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 5 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 6 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 7 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 8 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 9 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 10 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.040 14:41:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.414 14:41:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.414 14:41:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:28.414 14:41:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:28.414 14:41:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.414 14:41:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.813 14:41:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.813 00:06:29.813 real 0m2.619s 00:06:29.813 user 0m0.022s 00:06:29.813 sys 0m0.007s 00:06:29.813 14:41:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.813 14:41:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.813 ************************************ 00:06:29.813 END TEST scheduler_create_thread 00:06:29.813 ************************************ 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:29.813 14:41:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:29.813 14:41:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2683950 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2683950 ']' 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2683950 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2683950 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2683950' 00:06:29.813 killing process with pid 2683950 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2683950 00:06:29.813 14:41:03 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2683950 00:06:30.070 [2024-07-15 14:41:03.825550] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
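Condensed, the scheduler_create_thread test drives the app through an rpc.py plugin: one busy and one idle thread pinned to each core, two unpinned threads with different activity levels (one of which is re-tuned with scheduler_thread_set_active), and one thread created only to be deleted again. The sketch below reuses the rpc wrapper from the previous snippet; the PYTHONPATH export is an assumption about where scheduler_plugin lives, and the for-loop condenses the per-mask calls shown in the trace.

    export PYTHONPATH=$SPDK_DIR/test/event/scheduler:$PYTHONPATH  # assumed plugin location

    # One active (busy) and one idle thread pinned to each of the four cores.
    for mask in 0x1 0x2 0x4 0x8; do
        rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
        rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # Unpinned threads with different activity levels; the RPC prints the new thread id.
    rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    id=$(rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc --plugin scheduler_plugin scheduler_thread_set_active "$id" 50

    # Finally, a thread that is created and immediately deleted.
    id=$(rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc --plugin scheduler_plugin scheduler_thread_delete "$id"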
00:06:30.328 00:06:30.328 real 0m4.335s 00:06:30.328 user 0m8.247s 00:06:30.328 sys 0m0.346s 00:06:30.328 14:41:04 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.328 14:41:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.328 ************************************ 00:06:30.328 END TEST event_scheduler 00:06:30.328 ************************************ 00:06:30.328 14:41:04 event -- common/autotest_common.sh@1142 -- # return 0 00:06:30.328 14:41:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:30.328 14:41:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:30.328 14:41:04 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:30.328 14:41:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.328 14:41:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.328 ************************************ 00:06:30.328 START TEST app_repeat 00:06:30.328 ************************************ 00:06:30.328 14:41:04 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2684696 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2684696' 00:06:30.328 Process app_repeat pid: 2684696 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:30.328 spdk_app_start Round 0 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2684696 /var/tmp/spdk-nbd.sock 00:06:30.328 14:41:04 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:30.328 14:41:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2684696 ']' 00:06:30.328 14:41:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.328 14:41:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.328 14:41:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.328 14:41:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.328 14:41:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.328 [2024-07-15 14:41:04.117495] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:06:30.328 [2024-07-15 14:41:04.117547] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684696 ] 00:06:30.328 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.328 [2024-07-15 14:41:04.172876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.585 [2024-07-15 14:41:04.253324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.585 [2024-07-15 14:41:04.253327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.148 14:41:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.148 14:41:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:31.148 14:41:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.405 Malloc0 00:06:31.405 14:41:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.405 Malloc1 00:06:31.405 14:41:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.405 14:41:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.405 14:41:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.405 14:41:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.405 14:41:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.406 14:41:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.663 /dev/nbd0 00:06:31.663 14:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.663 14:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.663 14:41:05 event.app_repeat -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.663 1+0 records in 00:06:31.663 1+0 records out 00:06:31.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199298 s, 20.6 MB/s 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:31.663 14:41:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:31.663 14:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.663 14:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.663 14:41:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.921 /dev/nbd1 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.921 1+0 records in 00:06:31.921 1+0 records out 00:06:31.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224395 s, 18.3 MB/s 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:31.921 14:41:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.921 14:41:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.921 { 00:06:31.921 "nbd_device": "/dev/nbd0", 00:06:31.921 "bdev_name": "Malloc0" 00:06:31.921 }, 00:06:31.921 { 00:06:31.921 "nbd_device": "/dev/nbd1", 00:06:31.921 "bdev_name": "Malloc1" 00:06:31.921 } 00:06:31.921 ]' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.179 { 00:06:32.179 "nbd_device": "/dev/nbd0", 00:06:32.179 "bdev_name": "Malloc0" 00:06:32.179 }, 00:06:32.179 { 00:06:32.179 "nbd_device": "/dev/nbd1", 00:06:32.179 "bdev_name": "Malloc1" 00:06:32.179 } 00:06:32.179 ]' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.179 /dev/nbd1' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.179 /dev/nbd1' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.179 256+0 records in 00:06:32.179 256+0 records out 00:06:32.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103591 s, 101 MB/s 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.179 256+0 records in 00:06:32.179 256+0 records out 00:06:32.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141147 s, 74.3 MB/s 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.179 256+0 records in 00:06:32.179 256+0 records out 00:06:32.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141656 s, 74.0 MB/s 00:06:32.179 14:41:05 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.179 14:41:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.436 
14:41:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.436 14:41:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.693 14:41:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.693 14:41:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.951 14:41:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.209 [2024-07-15 14:41:06.939321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.209 [2024-07-15 14:41:07.006449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.209 [2024-07-15 14:41:07.006452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.209 [2024-07-15 14:41:07.047059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.209 [2024-07-15 14:41:07.047097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.485 14:41:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.485 14:41:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:36.485 spdk_app_start Round 1 00:06:36.485 14:41:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2684696 /var/tmp/spdk-nbd.sock 00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2684696 ']' 00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
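The nbd_rpc_data_verify pass that app_repeat runs in every round reduces to: create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, write a megabyte of random data through each device with O_DIRECT, and compare it back against the source file. The sketch below mirrors the commands visible in the trace; the nbd_rpc helper name is introduced here for brevity, and the socket is the app_repeat instance's /var/tmp/spdk-nbd.sock.

    # Data-verify pass as seen in the trace (one app_repeat round).
    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    nbd_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
    tmp_file=$SPDK_DIR/test/event/nbdrandtest

    nbd_rpc bdev_malloc_create 64 4096        # -> Malloc0 (64 MB, 4096-byte blocks)
    nbd_rpc bdev_malloc_create 64 4096        # -> Malloc1
    nbd_rpc nbd_start_disk Malloc0 /dev/nbd0
    nbd_rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp_file" "$nbd"       # a mismatch here fails the round
    done
    rm "$tmp_file"

Writing with oflag=direct pushes the data through the NBD device rather than the page cache, which is what makes the subsequent cmp a meaningful round-trip check of the malloc bdev path.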
00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.485 14:41:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:36.485 14:41:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.485 Malloc0 00:06:36.485 14:41:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.485 Malloc1 00:06:36.485 14:41:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.485 14:41:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.743 /dev/nbd0 00:06:36.743 14:41:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.743 14:41:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:36.743 1+0 records in 00:06:36.743 1+0 records out 00:06:36.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178922 s, 22.9 MB/s 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:36.743 14:41:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:36.743 14:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.743 14:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.743 14:41:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.000 /dev/nbd1 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.001 1+0 records in 00:06:37.001 1+0 records out 00:06:37.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201405 s, 20.3 MB/s 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.001 14:41:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.001 { 00:06:37.001 
"nbd_device": "/dev/nbd0", 00:06:37.001 "bdev_name": "Malloc0" 00:06:37.001 }, 00:06:37.001 { 00:06:37.001 "nbd_device": "/dev/nbd1", 00:06:37.001 "bdev_name": "Malloc1" 00:06:37.001 } 00:06:37.001 ]' 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.001 { 00:06:37.001 "nbd_device": "/dev/nbd0", 00:06:37.001 "bdev_name": "Malloc0" 00:06:37.001 }, 00:06:37.001 { 00:06:37.001 "nbd_device": "/dev/nbd1", 00:06:37.001 "bdev_name": "Malloc1" 00:06:37.001 } 00:06:37.001 ]' 00:06:37.001 14:41:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.259 /dev/nbd1' 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.259 /dev/nbd1' 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.259 256+0 records in 00:06:37.259 256+0 records out 00:06:37.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102764 s, 102 MB/s 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.259 256+0 records in 00:06:37.259 256+0 records out 00:06:37.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137327 s, 76.4 MB/s 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.259 14:41:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.259 256+0 records in 00:06:37.259 256+0 records out 00:06:37.259 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144276 s, 72.7 MB/s 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.259 14:41:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.517 14:41:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.774 14:41:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.774 14:41:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:38.038 14:41:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:38.295 [2024-07-15 14:41:12.017280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.295 [2024-07-15 14:41:12.084512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.295 [2024-07-15 14:41:12.084514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.295 [2024-07-15 14:41:12.125805] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.295 [2024-07-15 14:41:12.125845] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.577 14:41:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:41.577 14:41:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:41.577 spdk_app_start Round 2 00:06:41.577 14:41:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2684696 /var/tmp/spdk-nbd.sock 00:06:41.578 14:41:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2684696 ']' 00:06:41.578 14:41:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.578 14:41:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.578 14:41:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
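After verification each round tears the NBD devices down again before the next spdk_app_start. Roughly, reusing the nbd_rpc helper from the previous sketch: stop each disk, wait for the kernel to drop its /proc/partitions entry (a simplified stand-in for waitfornbd_exit), confirm nbd_get_disks reports nothing, then ask the app to exit with SIGTERM and pause before the next round.

    # Teardown for one app_repeat round, mirroring the trace above.
    for nbd in /dev/nbd0 /dev/nbd1; do
        nbd_rpc nbd_stop_disk "$nbd"
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$(basename "$nbd")" /proc/partitions || break   # entry gone -> done
            sleep 0.1
        done
    done

    # Sanity check: no NBD devices should still be registered with the app.
    count=$(nbd_rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [[ $count -eq 0 ]]   # non-zero count would indicate a leaked NBD device

    nbd_rpc spdk_kill_instance SIGTERM
    sleep 3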
00:06:41.578 14:41:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.578 14:41:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.578 14:41:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.578 14:41:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:41.578 14:41:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.578 Malloc0 00:06:41.578 14:41:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:41.578 Malloc1 00:06:41.578 14:41:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.578 14:41:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:41.837 /dev/nbd0 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:41.837 1+0 records in 00:06:41.837 1+0 records out 00:06:41.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201237 s, 20.4 MB/s 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.837 /dev/nbd1 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.837 1+0 records in 00:06:41.837 1+0 records out 00:06:41.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192062 s, 21.3 MB/s 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:41.837 14:41:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.837 14:41:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:42.095 { 00:06:42.095 
"nbd_device": "/dev/nbd0", 00:06:42.095 "bdev_name": "Malloc0" 00:06:42.095 }, 00:06:42.095 { 00:06:42.095 "nbd_device": "/dev/nbd1", 00:06:42.095 "bdev_name": "Malloc1" 00:06:42.095 } 00:06:42.095 ]' 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:42.095 { 00:06:42.095 "nbd_device": "/dev/nbd0", 00:06:42.095 "bdev_name": "Malloc0" 00:06:42.095 }, 00:06:42.095 { 00:06:42.095 "nbd_device": "/dev/nbd1", 00:06:42.095 "bdev_name": "Malloc1" 00:06:42.095 } 00:06:42.095 ]' 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:42.095 /dev/nbd1' 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:42.095 /dev/nbd1' 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:42.095 14:41:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:42.095 256+0 records in 00:06:42.095 256+0 records out 00:06:42.095 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103171 s, 102 MB/s 00:06:42.095 14:41:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.095 14:41:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:42.353 256+0 records in 00:06:42.353 256+0 records out 00:06:42.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143632 s, 73.0 MB/s 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:42.353 256+0 records in 00:06:42.353 256+0 records out 00:06:42.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143693 s, 73.0 MB/s 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:42.353 14:41:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.610 14:41:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:42.940 14:41:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:42.940 14:41:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:43.287 14:41:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:43.287 [2024-07-15 14:41:17.022437] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.287 [2024-07-15 14:41:17.094291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.287 [2024-07-15 14:41:17.094295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.287 [2024-07-15 14:41:17.135194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:43.287 [2024-07-15 14:41:17.135235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:46.569 14:41:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2684696 /var/tmp/spdk-nbd.sock 00:06:46.569 14:41:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2684696 ']' 00:06:46.569 14:41:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:46.569 14:41:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.569 14:41:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:46.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
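For readers following the trace, the nbd_dd_data_verify steps logged above reduce to a write-then-compare loop over the NBD devices. The snippet below is only a minimal reconstruction from the dd and cmp invocations visible in the log; TMP_FILE and NBD_LIST are placeholder values, not paths taken from the test environment.

    # sketch of the write/verify pattern traced above (placeholder paths)
    NBD_LIST="/dev/nbd0 /dev/nbd1"
    TMP_FILE=/tmp/nbdrandtest                  # stand-in for the repo's nbdrandtest file

    # write: fill a 1 MiB random pattern file, then copy it onto every NBD device
    dd if=/dev/urandom of="$TMP_FILE" bs=4096 count=256
    for nbd in $NBD_LIST; do
        dd if="$TMP_FILE" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify: the first 1 MiB read back from each device must match the pattern file
    for nbd in $NBD_LIST; do
        cmp -b -n 1M "$TMP_FILE" "$nbd"
    done
    rm "$TMP_FILE"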
00:06:46.569 14:41:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.569 14:41:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:46.569 14:41:20 event.app_repeat -- event/event.sh@39 -- # killprocess 2684696 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2684696 ']' 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2684696 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2684696 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2684696' 00:06:46.569 killing process with pid 2684696 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2684696 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2684696 00:06:46.569 spdk_app_start is called in Round 0. 00:06:46.569 Shutdown signal received, stop current app iteration 00:06:46.569 Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 reinitialization... 00:06:46.569 spdk_app_start is called in Round 1. 00:06:46.569 Shutdown signal received, stop current app iteration 00:06:46.569 Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 reinitialization... 00:06:46.569 spdk_app_start is called in Round 2. 00:06:46.569 Shutdown signal received, stop current app iteration 00:06:46.569 Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 reinitialization... 00:06:46.569 spdk_app_start is called in Round 3. 
00:06:46.569 Shutdown signal received, stop current app iteration 00:06:46.569 14:41:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:46.569 14:41:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:46.569 00:06:46.569 real 0m16.146s 00:06:46.569 user 0m34.980s 00:06:46.569 sys 0m2.284s 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.569 14:41:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 ************************************ 00:06:46.569 END TEST app_repeat 00:06:46.569 ************************************ 00:06:46.569 14:41:20 event -- common/autotest_common.sh@1142 -- # return 0 00:06:46.569 14:41:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:46.569 14:41:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.569 14:41:20 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.569 14:41:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.569 14:41:20 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 ************************************ 00:06:46.569 START TEST cpu_locks 00:06:46.569 ************************************ 00:06:46.569 14:41:20 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:46.569 * Looking for test storage... 00:06:46.569 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:46.569 14:41:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:46.569 14:41:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:46.569 14:41:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:46.569 14:41:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:46.569 14:41:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:46.569 14:41:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.569 14:41:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 ************************************ 00:06:46.569 START TEST default_locks 00:06:46.569 ************************************ 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2687684 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2687684 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2687684 ']' 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
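The default_locks flow that begins here checks that a running spdk_tgt holds a per-core lock file, which lslocks can report for the target's pid. A hedged stand-in for that check, reconstructed from the locks_exist and killprocess calls in the trace (the pgrep lookup is only illustrative; the real test keeps the pid from launch):

    # a live spdk_tgt should hold /var/tmp/spdk_cpu_lock_* entries visible via lslocks
    pid=$(pgrep -f spdk_tgt | head -n1)        # illustrative way to find the target pid
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"

    # once the target is killed, waiting on the pid again must fail,
    # which is what the NOT waitforlisten step further down asserts
    kill "$pid"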
00:06:46.569 14:41:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.569 14:41:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.569 [2024-07-15 14:41:20.457867] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:46.569 [2024-07-15 14:41:20.457905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687684 ] 00:06:46.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.827 [2024-07-15 14:41:20.512945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.827 [2024-07-15 14:41:20.592469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.394 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.394 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:47.394 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2687684 00:06:47.394 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2687684 00:06:47.394 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.653 lslocks: write error 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2687684 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2687684 ']' 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2687684 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2687684 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2687684' 00:06:47.653 killing process with pid 2687684 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2687684 00:06:47.653 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2687684 00:06:47.912 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2687684 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2687684 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2687684 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2687684 ']' 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.913 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2687684) - No such process 00:06:47.913 ERROR: process (pid: 2687684) is no longer running 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:47.913 00:06:47.913 real 0m1.319s 00:06:47.913 user 0m1.395s 00:06:47.913 sys 0m0.385s 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.913 14:41:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.913 ************************************ 00:06:47.913 END TEST default_locks 00:06:47.913 ************************************ 00:06:47.913 14:41:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.913 14:41:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:47.913 14:41:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.913 14:41:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.913 14:41:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.913 ************************************ 00:06:47.913 START TEST default_locks_via_rpc 00:06:47.913 ************************************ 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2687944 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2687944 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2687944 ']' 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.913 14:41:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.172 [2024-07-15 14:41:21.859276] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:48.172 [2024-07-15 14:41:21.859317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2687944 ] 00:06:48.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.172 [2024-07-15 14:41:21.913430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.172 [2024-07-15 14:41:21.983715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.739 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.739 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:48.739 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:48.739 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.739 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.739 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.739 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2687944 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2687944 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
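default_locks_via_rpc exercises the same core lock through RPC instead of startup flags. The sketch below mirrors only the rpc_cmd names visible in the trace (framework_disable_cpumask_locks / framework_enable_cpumask_locks); SPDK_DIR is a placeholder path.

    SPDK_DIR=/path/to/spdk                               # placeholder
    RPC="$SPDK_DIR/scripts/rpc.py"                       # talks to /var/tmp/spdk.sock by default

    "$RPC" framework_disable_cpumask_locks               # release the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null \
        || echo "no core lock files"                     # matches the no_locks check in the trace
    "$RPC" framework_enable_cpumask_locks                # re-acquire them
    lslocks | grep -q spdk_cpu_lock && echo "locks re-acquired"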
00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2687944 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2687944 ']' 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2687944 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2687944 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2687944' 00:06:48.997 killing process with pid 2687944 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2687944 00:06:48.997 14:41:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2687944 00:06:49.255 00:06:49.255 real 0m1.319s 00:06:49.255 user 0m1.372s 00:06:49.255 sys 0m0.408s 00:06:49.255 14:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.255 14:41:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.255 ************************************ 00:06:49.255 END TEST default_locks_via_rpc 00:06:49.255 ************************************ 00:06:49.255 14:41:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:49.255 14:41:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.255 14:41:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.255 14:41:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.255 14:41:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.518 ************************************ 00:06:49.518 START TEST non_locking_app_on_locked_coremask 00:06:49.518 ************************************ 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2688206 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2688206 /var/tmp/spdk.sock 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2688206 ']' 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:49.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.518 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.519 14:41:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.519 [2024-07-15 14:41:23.235178] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:49.519 [2024-07-15 14:41:23.235217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688206 ] 00:06:49.519 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.519 [2024-07-15 14:41:23.289201] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.519 [2024-07-15 14:41:23.368479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2688348 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2688348 /var/tmp/spdk2.sock 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2688348 ']' 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.450 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:50.450 [2024-07-15 14:41:24.065052] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:50.450 [2024-07-15 14:41:24.065099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688348 ] 00:06:50.450 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.450 [2024-07-15 14:41:24.139675] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
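The point of non_locking_app_on_locked_coremask, as the trace shows, is that a second spdk_tgt can share core 0 provided it opts out of lock acquisition. A minimal sketch of the two launches, with a placeholder binary path and without the test's retry helpers:

    SPDK_BIN=/path/to/spdk/build/bin/spdk_tgt            # placeholder

    "$SPDK_BIN" -m 0x1 &                                 # holds spdk_cpu_lock_000
    pid1=$!
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                              # logs "CPU core locks deactivated."

    # both processes coexist; only pid1 appears in lslocks for the core lock
    lslocks | grep spdk_cpu_lock
    kill "$pid1" "$pid2"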
00:06:50.450 [2024-07-15 14:41:24.139700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.450 [2024-07-15 14:41:24.291999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.017 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.017 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:51.017 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2688206 00:06:51.017 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2688206 00:06:51.017 14:41:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.275 lslocks: write error 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2688206 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2688206 ']' 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2688206 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688206 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688206' 00:06:51.275 killing process with pid 2688206 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2688206 00:06:51.275 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2688206 00:06:51.839 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2688348 00:06:51.839 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2688348 ']' 00:06:51.840 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2688348 00:06:51.840 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:51.840 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.840 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688348 00:06:52.097 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.097 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.097 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688348' 00:06:52.097 
killing process with pid 2688348 00:06:52.097 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2688348 00:06:52.097 14:41:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2688348 00:06:52.355 00:06:52.355 real 0m2.913s 00:06:52.355 user 0m3.132s 00:06:52.355 sys 0m0.767s 00:06:52.355 14:41:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.355 14:41:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.355 ************************************ 00:06:52.355 END TEST non_locking_app_on_locked_coremask 00:06:52.355 ************************************ 00:06:52.355 14:41:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:52.355 14:41:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:52.355 14:41:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.355 14:41:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.355 14:41:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.355 ************************************ 00:06:52.355 START TEST locking_app_on_unlocked_coremask 00:06:52.355 ************************************ 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2688712 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2688712 /var/tmp/spdk.sock 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2688712 ']' 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.355 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:52.355 [2024-07-15 14:41:26.207969] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:06:52.355 [2024-07-15 14:41:26.208007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688712 ] 00:06:52.355 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.355 [2024-07-15 14:41:26.261672] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:52.355 [2024-07-15 14:41:26.261695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.612 [2024-07-15 14:41:26.341707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2688932 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2688932 /var/tmp/spdk2.sock 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2688932 ']' 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.178 14:41:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.178 [2024-07-15 14:41:27.025027] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
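locking_app_on_unlocked_coremask flips the previous case: the first target gives up the lock, so a second, normally started target on the same core can still claim it. A sketch under the same placeholder-path assumption, with no retry logic:

    SPDK_BIN=/path/to/spdk/build/bin/spdk_tgt            # placeholder

    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks &         # first instance takes no lock
    pid1=$!
    "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &          # second instance claims spdk_cpu_lock_000
    pid2=$!

    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "lock held by the second instance"
    kill "$pid1" "$pid2"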
00:06:53.178 [2024-07-15 14:41:27.025072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688932 ] 00:06:53.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.178 [2024-07-15 14:41:27.093805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.436 [2024-07-15 14:41:27.243067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.001 14:41:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.001 14:41:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:54.001 14:41:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2688932 00:06:54.001 14:41:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.001 14:41:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2688932 00:06:54.566 lslocks: write error 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2688712 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2688712 ']' 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2688712 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688712 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688712' 00:06:54.566 killing process with pid 2688712 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2688712 00:06:54.566 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2688712 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2688932 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2688932 ']' 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2688932 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2688932 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2688932' 00:06:55.132 killing process with pid 2688932 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2688932 00:06:55.132 14:41:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2688932 00:06:55.391 00:06:55.391 real 0m3.111s 00:06:55.391 user 0m3.338s 00:06:55.391 sys 0m0.860s 00:06:55.391 14:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.391 14:41:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.391 ************************************ 00:06:55.391 END TEST locking_app_on_unlocked_coremask 00:06:55.391 ************************************ 00:06:55.391 14:41:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:55.391 14:41:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:55.391 14:41:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.391 14:41:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.391 14:41:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.649 ************************************ 00:06:55.649 START TEST locking_app_on_locked_coremask 00:06:55.649 ************************************ 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2689233 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2689233 /var/tmp/spdk.sock 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2689233 ']' 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.649 14:41:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.649 [2024-07-15 14:41:29.381432] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:06:55.649 [2024-07-15 14:41:29.381473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689233 ] 00:06:55.649 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.649 [2024-07-15 14:41:29.435025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.649 [2024-07-15 14:41:29.514749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2689445 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2689445 /var/tmp/spdk2.sock 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2689445 /var/tmp/spdk2.sock 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2689445 /var/tmp/spdk2.sock 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2689445 ']' 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:56.583 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.583 [2024-07-15 14:41:30.228808] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
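Here the second target is started without --disable-cpumask-locks on an already-locked core, so it is expected to abort with "Cannot create lock on core 0". The NOT helper in the trace simply asserts a non-zero exit; a hedged stand-in (placeholder path, crude sleep instead of waitforlisten):

    SPDK_BIN=/path/to/spdk/build/bin/spdk_tgt            # placeholder

    "$SPDK_BIN" -m 0x1 &                                 # first instance claims core 0
    pid1=$!
    sleep 1                                              # crude stand-in for waitforlisten

    if "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance started" >&2
    else
        echo "expected failure: core 0 already locked"
    fi
    kill "$pid1"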
00:06:56.583 [2024-07-15 14:41:30.228855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689445 ] 00:06:56.583 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.583 [2024-07-15 14:41:30.297524] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2689233 has claimed it. 00:06:56.583 [2024-07-15 14:41:30.297561] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.148 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2689445) - No such process 00:06:57.148 ERROR: process (pid: 2689445) is no longer running 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2689233 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2689233 00:06:57.148 14:41:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.406 lslocks: write error 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2689233 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2689233 ']' 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2689233 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2689233 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2689233' 00:06:57.406 killing process with pid 2689233 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2689233 00:06:57.406 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2689233 00:06:57.665 00:06:57.665 real 0m2.116s 00:06:57.665 user 0m2.345s 00:06:57.665 sys 0m0.531s 00:06:57.665 14:41:31 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.665 14:41:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.665 ************************************ 00:06:57.665 END TEST locking_app_on_locked_coremask 00:06:57.665 ************************************ 00:06:57.665 14:41:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:57.665 14:41:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:57.665 14:41:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:57.665 14:41:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.665 14:41:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.665 ************************************ 00:06:57.665 START TEST locking_overlapped_coremask 00:06:57.665 ************************************ 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2689703 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2689703 /var/tmp/spdk.sock 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2689703 ']' 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.665 14:41:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.665 [2024-07-15 14:41:31.567168] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:06:57.665 [2024-07-15 14:41:31.567211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689703 ] 00:06:57.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.923 [2024-07-15 14:41:31.620937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.923 [2024-07-15 14:41:31.691156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.923 [2024-07-15 14:41:31.691253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.923 [2024-07-15 14:41:31.691255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2689935 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2689935 /var/tmp/spdk2.sock 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2689935 /var/tmp/spdk2.sock 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2689935 /var/tmp/spdk2.sock 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2689935 ']' 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.488 14:41:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.488 [2024-07-15 14:41:32.405329] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
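locking_overlapped_coremask extends the same idea to multi-core masks: the first target takes 0x7 (cores 0-2), a second asking for 0x1c (cores 2-4) collides on core 2 and exits, and check_remaining_locks then verifies that exactly /var/tmp/spdk_cpu_lock_000 through _002 remain. A sketch of that final comparison, following the glob and brace expansion visible in the trace below:

    # compare the lock files that exist now against those expected for mask 0x7
    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files are present
    expected=(/var/tmp/spdk_cpu_lock_{000..002})         # cores 0-2 held by the first target

    if [[ "${locks[*]}" == "${expected[*]}" ]]; then
        echo "only the first target's core locks remain"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
    fi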
00:06:58.488 [2024-07-15 14:41:32.405377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2689935 ] 00:06:58.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.746 [2024-07-15 14:41:32.479270] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2689703 has claimed it. 00:06:58.746 [2024-07-15 14:41:32.479308] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.311 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2689935) - No such process 00:06:59.311 ERROR: process (pid: 2689935) is no longer running 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2689703 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2689703 ']' 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2689703 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2689703 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2689703' 00:06:59.311 killing process with pid 2689703 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2689703 00:06:59.311 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2689703 00:06:59.570 00:06:59.570 real 0m1.875s 00:06:59.570 user 0m5.309s 00:06:59.570 sys 0m0.380s 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.570 ************************************ 00:06:59.570 END TEST locking_overlapped_coremask 00:06:59.570 ************************************ 00:06:59.570 14:41:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:59.570 14:41:33 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:59.570 14:41:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.570 14:41:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.570 14:41:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.570 ************************************ 00:06:59.570 START TEST locking_overlapped_coremask_via_rpc 00:06:59.570 ************************************ 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2690035 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2690035 /var/tmp/spdk.sock 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2690035 ']' 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.570 14:41:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.827 [2024-07-15 14:41:33.505338] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:06:59.827 [2024-07-15 14:41:33.505380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690035 ] 00:06:59.827 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.827 [2024-07-15 14:41:33.562177] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:59.827 [2024-07-15 14:41:33.562200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.827 [2024-07-15 14:41:33.641948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.827 [2024-07-15 14:41:33.642056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.827 [2024-07-15 14:41:33.642058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2690209 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2690209 /var/tmp/spdk2.sock 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2690209 ']' 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:00.393 14:41:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.650 [2024-07-15 14:41:34.345676] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:00.650 [2024-07-15 14:41:34.345725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690209 ] 00:07:00.650 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.650 [2024-07-15 14:41:34.420980] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:00.650 [2024-07-15 14:41:34.421005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.650 [2024-07-15 14:41:34.566704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.907 [2024-07-15 14:41:34.570585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.907 [2024-07-15 14:41:34.570586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.472 [2024-07-15 14:41:35.181614] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2690035 has claimed it. 
00:07:01.472 request: 00:07:01.472 { 00:07:01.472 "method": "framework_enable_cpumask_locks", 00:07:01.472 "req_id": 1 00:07:01.472 } 00:07:01.472 Got JSON-RPC error response 00:07:01.472 response: 00:07:01.472 { 00:07:01.472 "code": -32603, 00:07:01.472 "message": "Failed to claim CPU core: 2" 00:07:01.472 } 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2690035 /var/tmp/spdk.sock 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2690035 ']' 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2690209 /var/tmp/spdk2.sock 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2690209 ']' 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.472 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:01.730 00:07:01.730 real 0m2.091s 00:07:01.730 user 0m0.869s 00:07:01.730 sys 0m0.148s 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.730 14:41:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.730 ************************************ 00:07:01.730 END TEST locking_overlapped_coremask_via_rpc 00:07:01.730 ************************************ 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:01.730 14:41:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:01.730 14:41:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2690035 ]] 00:07:01.730 14:41:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2690035 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2690035 ']' 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2690035 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2690035 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2690035' 00:07:01.730 killing process with pid 2690035 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2690035 00:07:01.730 14:41:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2690035 00:07:02.295 14:41:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2690209 ]] 00:07:02.295 14:41:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2690209 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2690209 ']' 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2690209 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2690209 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2690209' 00:07:02.295 killing process with pid 2690209 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2690209 00:07:02.295 14:41:35 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2690209 00:07:02.553 14:41:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.553 14:41:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:02.553 14:41:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2690035 ]] 00:07:02.553 14:41:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2690035 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2690035 ']' 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2690035 00:07:02.553 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2690035) - No such process 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2690035 is not found' 00:07:02.553 Process with pid 2690035 is not found 00:07:02.553 14:41:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2690209 ]] 00:07:02.553 14:41:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2690209 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2690209 ']' 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2690209 00:07:02.553 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2690209) - No such process 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2690209 is not found' 00:07:02.553 Process with pid 2690209 is not found 00:07:02.553 14:41:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:02.553 00:07:02.553 real 0m16.010s 00:07:02.553 user 0m28.269s 00:07:02.553 sys 0m4.333s 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.553 14:41:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.553 ************************************ 00:07:02.553 END TEST cpu_locks 00:07:02.553 ************************************ 00:07:02.553 14:41:36 event -- common/autotest_common.sh@1142 -- # return 0 00:07:02.553 00:07:02.553 real 0m40.659s 00:07:02.553 user 1m18.126s 00:07:02.553 sys 0m7.522s 00:07:02.553 14:41:36 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.553 14:41:36 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.553 ************************************ 00:07:02.553 END TEST event 00:07:02.553 ************************************ 00:07:02.553 14:41:36 -- common/autotest_common.sh@1142 -- # return 0 00:07:02.553 14:41:36 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:02.553 14:41:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:02.553 14:41:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.553 14:41:36 -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.553 ************************************ 00:07:02.553 START TEST thread 00:07:02.553 ************************************ 00:07:02.553 14:41:36 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:02.553 * Looking for test storage... 00:07:02.810 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:02.810 14:41:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.810 14:41:36 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:02.810 14:41:36 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.810 14:41:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.810 ************************************ 00:07:02.810 START TEST thread_poller_perf 00:07:02.810 ************************************ 00:07:02.810 14:41:36 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:02.810 [2024-07-15 14:41:36.533575] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:02.810 [2024-07-15 14:41:36.533642] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690760 ] 00:07:02.810 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.810 [2024-07-15 14:41:36.592856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.810 [2024-07-15 14:41:36.665590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.810 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:04.190 ====================================== 00:07:04.190 busy:2106449774 (cyc) 00:07:04.190 total_run_count: 422000 00:07:04.190 tsc_hz: 2100000000 (cyc) 00:07:04.190 ====================================== 00:07:04.190 poller_cost: 4991 (cyc), 2376 (nsec) 00:07:04.190 00:07:04.190 real 0m1.227s 00:07:04.190 user 0m1.141s 00:07:04.190 sys 0m0.083s 00:07:04.190 14:41:37 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.190 14:41:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.190 ************************************ 00:07:04.190 END TEST thread_poller_perf 00:07:04.190 ************************************ 00:07:04.190 14:41:37 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:04.190 14:41:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.190 14:41:37 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:04.190 14:41:37 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.190 14:41:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.190 ************************************ 00:07:04.190 START TEST thread_poller_perf 00:07:04.190 ************************************ 00:07:04.190 14:41:37 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:04.190 [2024-07-15 14:41:37.830796] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:04.190 [2024-07-15 14:41:37.830869] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2690987 ] 00:07:04.190 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.190 [2024-07-15 14:41:37.889105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.190 [2024-07-15 14:41:37.958202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.190 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:05.122 ====================================== 00:07:05.122 busy:2101604180 (cyc) 00:07:05.122 total_run_count: 5627000 00:07:05.122 tsc_hz: 2100000000 (cyc) 00:07:05.122 ====================================== 00:07:05.122 poller_cost: 373 (cyc), 177 (nsec) 00:07:05.122 00:07:05.122 real 0m1.217s 00:07:05.122 user 0m1.141s 00:07:05.122 sys 0m0.072s 00:07:05.122 14:41:39 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.122 14:41:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.122 ************************************ 00:07:05.122 END TEST thread_poller_perf 00:07:05.122 ************************************ 00:07:05.380 14:41:39 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:05.380 14:41:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:05.380 00:07:05.380 real 0m2.656s 00:07:05.380 user 0m2.373s 00:07:05.380 sys 0m0.291s 00:07:05.380 14:41:39 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.380 14:41:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.380 ************************************ 00:07:05.380 END TEST thread 00:07:05.380 ************************************ 00:07:05.380 14:41:39 -- common/autotest_common.sh@1142 -- # return 0 00:07:05.380 14:41:39 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:05.380 14:41:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.380 14:41:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.380 14:41:39 -- common/autotest_common.sh@10 -- # set +x 00:07:05.380 ************************************ 00:07:05.380 START TEST accel 00:07:05.380 ************************************ 00:07:05.380 14:41:39 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:05.380 * Looking for test storage... 00:07:05.380 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:05.380 14:41:39 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:05.380 14:41:39 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:05.380 14:41:39 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.380 14:41:39 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2691292 00:07:05.380 14:41:39 accel -- accel/accel.sh@63 -- # waitforlisten 2691292 00:07:05.380 14:41:39 accel -- common/autotest_common.sh@829 -- # '[' -z 2691292 ']' 00:07:05.380 14:41:39 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.380 14:41:39 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:05.380 14:41:39 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.380 14:41:39 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:05.380 14:41:39 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:05.380 14:41:39 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.380 14:41:39 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.380 14:41:39 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.380 14:41:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.380 14:41:39 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.380 14:41:39 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.380 14:41:39 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.380 14:41:39 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:05.380 14:41:39 accel -- accel/accel.sh@41 -- # jq -r . 00:07:05.380 [2024-07-15 14:41:39.270305] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:05.380 [2024-07-15 14:41:39.270351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691292 ] 00:07:05.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.638 [2024-07-15 14:41:39.324318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.638 [2024-07-15 14:41:39.403278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.202 14:41:40 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.202 14:41:40 accel -- common/autotest_common.sh@862 -- # return 0 00:07:06.202 14:41:40 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:06.202 14:41:40 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:06.202 14:41:40 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:06.202 14:41:40 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:06.202 14:41:40 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:06.202 14:41:40 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:06.202 14:41:40 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:06.202 14:41:40 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.202 14:41:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.202 14:41:40 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.202 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.202 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.202 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.460 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.460 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.460 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.460 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.460 
14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.460 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.460 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.460 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.460 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.460 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.460 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.460 14:41:40 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # IFS== 00:07:06.460 14:41:40 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:06.460 14:41:40 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:06.460 14:41:40 accel -- accel/accel.sh@75 -- # killprocess 2691292 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@948 -- # '[' -z 2691292 ']' 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@952 -- # kill -0 2691292 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@953 -- # uname 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2691292 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2691292' 00:07:06.460 killing process with pid 2691292 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@967 -- # kill 2691292 00:07:06.460 14:41:40 accel -- common/autotest_common.sh@972 -- # wait 2691292 00:07:06.718 14:41:40 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:06.718 14:41:40 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:06.718 14:41:40 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:06.718 14:41:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.718 14:41:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.718 14:41:40 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:06.718 14:41:40 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:06.718 14:41:40 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.718 14:41:40 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:06.718 14:41:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.718 14:41:40 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:06.718 14:41:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:06.718 14:41:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.718 14:41:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.718 ************************************ 00:07:06.718 START TEST accel_missing_filename 00:07:06.718 ************************************ 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:06.718 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:06.718 14:41:40 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:06.718 [2024-07-15 14:41:40.633569] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:06.718 [2024-07-15 14:41:40.633636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691566 ] 00:07:06.976 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.976 [2024-07-15 14:41:40.689269] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.976 [2024-07-15 14:41:40.760426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.976 [2024-07-15 14:41:40.801125] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.976 [2024-07-15 14:41:40.860502] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:07.234 A filename is required. 
00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.235 00:07:07.235 real 0m0.326s 00:07:07.235 user 0m0.250s 00:07:07.235 sys 0m0.114s 00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.235 14:41:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:07.235 ************************************ 00:07:07.235 END TEST accel_missing_filename 00:07:07.235 ************************************ 00:07:07.235 14:41:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.235 14:41:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:07.235 14:41:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:07.235 14:41:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.235 14:41:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.235 ************************************ 00:07:07.235 START TEST accel_compress_verify 00:07:07.235 ************************************ 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.235 14:41:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.235 14:41:40 accel.accel_compress_verify -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:07.235 14:41:40 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:07.235 [2024-07-15 14:41:41.019182] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:07.235 [2024-07-15 14:41:41.019228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691592 ] 00:07:07.235 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.235 [2024-07-15 14:41:41.073785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.235 [2024-07-15 14:41:41.145065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.493 [2024-07-15 14:41:41.186013] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.493 [2024-07-15 14:41:41.245672] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:07.493 00:07:07.493 Compression does not support the verify option, aborting. 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.493 00:07:07.493 real 0m0.325s 00:07:07.493 user 0m0.251s 00:07:07.493 sys 0m0.112s 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.493 14:41:41 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:07.493 ************************************ 00:07:07.493 END TEST accel_compress_verify 00:07:07.493 ************************************ 00:07:07.493 14:41:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.493 14:41:41 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:07.493 14:41:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:07.493 14:41:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.493 14:41:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.493 ************************************ 00:07:07.493 START TEST accel_wrong_workload 00:07:07.493 ************************************ 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:07.493 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:07.493 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:07.493 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:07.493 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.494 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.494 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.494 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.494 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.494 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:07.494 14:41:41 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:07.494 Unsupported workload type: foobar 00:07:07.494 [2024-07-15 14:41:41.409048] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:07.750 accel_perf options: 00:07:07.750 [-h help message] 00:07:07.750 [-q queue depth per core] 00:07:07.750 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:07.750 [-T number of threads per core 00:07:07.750 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:07.750 [-t time in seconds] 00:07:07.750 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:07.750 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:07.750 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:07.750 [-l for compress/decompress workloads, name of uncompressed input file 00:07:07.750 [-S for crc32c workload, use this seed value (default 0) 00:07:07.750 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:07.750 [-f for fill workload, use this BYTE value (default 255) 00:07:07.750 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:07.750 [-y verify result if this switch is on] 00:07:07.750 [-a tasks to allocate per core (default: same value as -q)] 00:07:07.750 Can be used to spread operations across a wider range of memory. 
00:07:07.750 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:07.750 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.750 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.750 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.750 00:07:07.750 real 0m0.034s 00:07:07.750 user 0m0.025s 00:07:07.750 sys 0m0.008s 00:07:07.750 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.750 14:41:41 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:07.750 ************************************ 00:07:07.750 END TEST accel_wrong_workload 00:07:07.750 ************************************ 00:07:07.750 Error: writing output failed: Broken pipe 00:07:07.750 14:41:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.750 14:41:41 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:07.750 14:41:41 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:07.750 14:41:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.750 14:41:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.750 ************************************ 00:07:07.750 START TEST accel_negative_buffers 00:07:07.750 ************************************ 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:07.750 14:41:41 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:07.750 -x option must be non-negative. 
00:07:07.750 [2024-07-15 14:41:41.512199] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:07.750 accel_perf options: 00:07:07.750 [-h help message] 00:07:07.750 [-q queue depth per core] 00:07:07.750 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:07.750 [-T number of threads per core 00:07:07.750 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:07.750 [-t time in seconds] 00:07:07.750 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:07.750 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:07.750 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:07.750 [-l for compress/decompress workloads, name of uncompressed input file 00:07:07.750 [-S for crc32c workload, use this seed value (default 0) 00:07:07.750 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:07.750 [-f for fill workload, use this BYTE value (default 255) 00:07:07.750 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:07.750 [-y verify result if this switch is on] 00:07:07.750 [-a tasks to allocate per core (default: same value as -q)] 00:07:07.750 Can be used to spread operations across a wider range of memory. 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.750 00:07:07.750 real 0m0.035s 00:07:07.750 user 0m0.020s 00:07:07.750 sys 0m0.014s 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.750 14:41:41 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:07.750 ************************************ 00:07:07.750 END TEST accel_negative_buffers 00:07:07.750 ************************************ 00:07:07.751 Error: writing output failed: Broken pipe 00:07:07.751 14:41:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.751 14:41:41 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:07.751 14:41:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:07.751 14:41:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.751 14:41:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.751 ************************************ 00:07:07.751 START TEST accel_crc32c 00:07:07.751 ************************************ 00:07:07.751 14:41:41 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:07.751 14:41:41 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:07.751 [2024-07-15 14:41:41.610969] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:07.751 [2024-07-15 14:41:41.611043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691658 ] 00:07:07.751 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.008 [2024-07-15 14:41:41.674058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.008 [2024-07-15 14:41:41.764650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:08.008 14:41:41 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:09.383 14:41:42 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.383 00:07:09.383 real 0m1.364s 00:07:09.383 user 0m1.253s 00:07:09.383 sys 0m0.122s 00:07:09.383 14:41:42 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.383 14:41:42 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:09.383 ************************************ 00:07:09.383 END TEST accel_crc32c 00:07:09.383 ************************************ 00:07:09.383 14:41:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.383 14:41:42 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:09.383 14:41:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:09.383 14:41:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.383 14:41:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.383 ************************************ 00:07:09.383 START TEST accel_crc32c_C2 00:07:09.383 ************************************ 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:09.383 14:41:43 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:09.383 [2024-07-15 14:41:43.038507] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:09.383 [2024-07-15 14:41:43.038586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2691928 ] 00:07:09.383 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.383 [2024-07-15 14:41:43.095608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.383 [2024-07-15 14:41:43.167889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.383 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:09.384 14:41:43 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.762 00:07:10.762 real 0m1.338s 00:07:10.762 user 0m1.238s 00:07:10.762 sys 0m0.114s 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.762 14:41:44 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:10.762 ************************************ 00:07:10.762 END TEST accel_crc32c_C2 00:07:10.762 ************************************ 00:07:10.762 14:41:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.762 14:41:44 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:10.762 14:41:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:10.762 14:41:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.762 14:41:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.762 ************************************ 00:07:10.762 START TEST accel_copy 00:07:10.762 ************************************ 00:07:10.762 14:41:44 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
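The "START TEST / END TEST" banners and the real/user/sys timings above (0m1.338s for accel_crc32c_C2) come from the run_test wrapper in common/autotest_common.sh, which times each case with bash's time builtin. A simplified stand-in for its shape only; the real helper also manages xtrace state and failure bookkeeping:

run_test() {
    local name=$1 banner rc
    shift
    banner=$(printf '*%.0s' {1..36})    # a run of asterisks like the banners above
    printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
    time "$@"
    rc=$?
    printf '%s\nEND TEST %s\n%s\n' "$banner" "$name" "$banner"
    return $rc
}

run_test demo_sleep sleep 1    # produces the same banner/timing shape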
00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:10.762 [2024-07-15 14:41:44.426305] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:10.762 [2024-07-15 14:41:44.426342] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692197 ] 00:07:10.762 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.762 [2024-07-15 14:41:44.480313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.762 [2024-07-15 14:41:44.557845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
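The "-c /dev/fd/62" in the accel_perf command lines is what bash process substitution looks like once expanded: build_accel_config prints a JSON config on stdout and accel_perf reads it as if it were a file. In this run the generated config is effectively empty (every "[[ 0 -gt 0 ]]" module check above evaluates false), so a placeholder with an assumed minimal shape stands in for it here:

# Stand-in for build_accel_config; the JSON shape is assumed, and the real
# function in accel/accel.sh only adds entries when optional accel modules
# are enabled for the run.
build_accel_config() {
    printf '{"subsystems":[{"subsystem":"accel","config":[]}]}\n'
}

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c <(build_accel_config) -t 1 -w copy -y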
00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:10.762 14:41:44 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.137 14:41:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:12.137 14:41:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:45 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:12.138 14:41:45 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.138 00:07:12.138 real 0m1.326s 00:07:12.138 user 0m1.237s 00:07:12.138 sys 0m0.104s 00:07:12.138 14:41:45 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.138 14:41:45 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:12.138 ************************************ 00:07:12.138 END TEST accel_copy 00:07:12.138 ************************************ 00:07:12.138 14:41:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.138 14:41:45 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.138 14:41:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:12.138 14:41:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.138 14:41:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.138 ************************************ 00:07:12.138 START TEST accel_fill 00:07:12.138 ************************************ 00:07:12.138 14:41:45 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 
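The accel_fill case being set up above was started as "accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y"; per the accel_perf usage text earlier in this log, -f is the fill byte value (0x80 here), -q the queue depth per core, -a the number of tasks to allocate per core (defaulting to -q), -t the run time in seconds and -y enables verification. The same workload can be launched directly against this workspace's binary:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w fill -f 128 -q 64 -a 64 -y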
00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:12.138 14:41:45 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:12.138 [2024-07-15 14:41:45.830549] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:12.138 [2024-07-15 14:41:45.830596] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692476 ] 00:07:12.138 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.138 [2024-07-15 14:41:45.886766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.138 [2024-07-15 14:41:45.963115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.138 14:41:46 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:12.138 14:41:46 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:13.513 14:41:47 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.513 00:07:13.513 real 0m1.340s 00:07:13.513 user 0m1.244s 00:07:13.514 sys 0m0.110s 00:07:13.514 14:41:47 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.514 14:41:47 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:13.514 ************************************ 00:07:13.514 END TEST accel_fill 00:07:13.514 ************************************ 00:07:13.514 14:41:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.514 14:41:47 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:13.514 14:41:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:13.514 14:41:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.514 14:41:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.514 ************************************ 00:07:13.514 START TEST accel_copy_crc32c 00:07:13.514 ************************************ 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
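The "[[ -n software ]]", "[[ -n fill ]]" and "[[ software == \s\o\f\t\w\a\r\e ]]" checks above are accel_test confirming which module and opcode accel_perf actually used; the values come from the repeated "IFS=:", "read -r var val" and "case \"$var\" in" entries, which parse accel_perf's "Key: value" summary output. A rough sketch of that parsing pattern, with illustrative key strings and canned input rather than the exact ones in accel/accel.sh:

parse_accel_summary() {
    local var val accel_module='' accel_opc=''
    while IFS=: read -r var val; do
        case "$var" in
            *"Module"*) accel_module=${val//[[:space:]]/} ;;
            *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;
        esac
    done
    [[ -n $accel_module && -n $accel_opc ]] || return 1
    echo "module=$accel_module opcode=$accel_opc"
}

printf '%s\n' 'Module:        software' 'Workload Type: crc32c' | parse_accel_summary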
00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:13.514 [2024-07-15 14:41:47.235442] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:13.514 [2024-07-15 14:41:47.235508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2692754 ] 00:07:13.514 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.514 [2024-07-15 14:41:47.291008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.514 [2024-07-15 14:41:47.363094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:13.514 14:41:47 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.890 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.890 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.890 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.890 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.891 00:07:14.891 real 0m1.336s 00:07:14.891 user 0m1.244s 00:07:14.891 sys 0m0.107s 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.891 14:41:48 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:14.891 ************************************ 00:07:14.891 END TEST accel_copy_crc32c 00:07:14.891 ************************************ 00:07:14.891 14:41:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.891 14:41:48 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:14.891 14:41:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:14.891 14:41:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.891 14:41:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.891 ************************************ 00:07:14.891 START TEST accel_copy_crc32c_C2 00:07:14.891 ************************************ 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:14.891 [2024-07-15 14:41:48.632083] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:14.891 [2024-07-15 14:41:48.632134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693021 ] 00:07:14.891 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.891 [2024-07-15 14:41:48.685988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.891 [2024-07-15 14:41:48.758136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:14.891 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
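Both "_C2" variants in this section pass "-C 2", which the usage text earlier describes as the io vector size to test; the extra "'8192 bytes'" value in the xtrace above is read here as the buffer covering two 4 KiB segments (an interpretation, not something the log states). Either case can be reproduced directly:

BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
"$BIN" -t 1 -w crc32c -y -C 2         # as in accel_crc32c_C2
"$BIN" -t 1 -w copy_crc32c -y -C 2    # as in accel_copy_crc32c_C2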
00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:15.196 14:41:48 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:49 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.198 00:07:16.198 real 0m1.335s 00:07:16.198 user 0m1.232s 00:07:16.198 sys 0m0.116s 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.198 14:41:49 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:16.198 ************************************ 00:07:16.198 END TEST accel_copy_crc32c_C2 00:07:16.198 ************************************ 00:07:16.198 14:41:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.198 14:41:49 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:16.198 14:41:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:16.198 14:41:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.198 14:41:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.198 ************************************ 00:07:16.198 START TEST accel_dualcast 00:07:16.198 ************************************ 00:07:16.198 14:41:50 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:16.198 14:41:50 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:16.198 [2024-07-15 14:41:50.037036] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
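Note on the accel_copy_crc32c_C2 run recorded above: accel_perf was invoked with "-t 1 -w copy_crc32c -y -C 2" and ran on the software module. As an illustrative aside only (not SPDK code: the helper name crc32c, the os.urandom payloads, and the idea that the CRC is accumulated across two 4096-byte segments are assumptions read off the '4096 bytes' / '8192 bytes' values in the trace), a minimal Python sketch of what a copy-plus-CRC-32C over two chained sources amounts to:

    import os

    def crc32c(data: bytes, crc: int = 0) -> int:
        # Bit-by-bit CRC-32C (Castagnoli), reflected polynomial 0x82F63B78.
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    srcs = [os.urandom(4096) for _ in range(2)]    # -C 2: two 4096-byte source segments
    dst = bytearray(len(srcs) * 4096)              # destination large enough for both copies
    crc = 0
    for i, src in enumerate(srcs):
        dst[i * 4096:(i + 1) * 4096] = src         # copy each segment ...
        crc = crc32c(src, crc)                     # ... while folding it into one running CRC
    assert bytes(dst) == b"".join(srcs) and crc == crc32c(b"".join(srcs))

The final assert only confirms that chaining the CRC segment by segment yields the same value as one pass over the concatenated data, which is the point of the chained (-C) variant.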
00:07:16.198 [2024-07-15 14:41:50.037086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693276 ] 00:07:16.198 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.198 [2024-07-15 14:41:50.092952] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.456 [2024-07-15 14:41:50.167217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.456 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:16.457 14:41:50 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:16.457 14:41:50 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:16.457 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:16.457 14:41:50 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.828 14:41:51 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.828 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.829 14:41:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:17.829 14:41:51 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.829 00:07:17.829 real 0m1.340s 00:07:17.829 user 0m1.243s 00:07:17.829 sys 0m0.110s 00:07:17.829 14:41:51 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.829 14:41:51 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:17.829 ************************************ 00:07:17.829 END TEST accel_dualcast 00:07:17.829 ************************************ 00:07:17.829 14:41:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:17.829 14:41:51 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:17.829 14:41:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:17.829 14:41:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.829 14:41:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.829 ************************************ 00:07:17.829 START TEST accel_compare 00:07:17.829 ************************************ 00:07:17.829 14:41:51 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:17.829 [2024-07-15 14:41:51.442960] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
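Note on the accel_dualcast run recorded above ("-t 1 -w dualcast -y", software module): dualcast writes one source buffer to two destinations. A minimal sketch, assuming a 4096-byte block as in the '4096 bytes' value of the trace (dualcast() and the random payload are illustrative only, not the SPDK API):

    import os

    def dualcast(src: bytes) -> tuple[bytes, bytes]:
        # One source, two identical destination copies.
        return bytes(src), bytes(src)

    src = os.urandom(4096)
    dst1, dst2 = dualcast(src)
    assert dst1 == src and dst2 == src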
00:07:17.829 [2024-07-15 14:41:51.443034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693542 ] 00:07:17.829 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.829 [2024-07-15 14:41:51.498519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.829 [2024-07-15 14:41:51.570629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:17.829 14:41:51 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 
14:41:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:19.203 14:41:52 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.203 00:07:19.203 real 0m1.336s 00:07:19.203 user 0m1.235s 00:07:19.203 sys 0m0.113s 00:07:19.203 14:41:52 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.203 14:41:52 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:19.203 ************************************ 00:07:19.203 END TEST accel_compare 00:07:19.203 ************************************ 00:07:19.203 14:41:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:19.203 14:41:52 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:19.203 14:41:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:19.203 14:41:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.203 14:41:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.203 ************************************ 00:07:19.203 START TEST accel_xor 00:07:19.203 ************************************ 00:07:19.203 14:41:52 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:19.203 [2024-07-15 14:41:52.828517] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
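Note on the accel_compare run recorded above ("-t 1 -w compare -y", software module): compare checks two equal-length buffers for equality, much like memcmp(). A minimal sketch under the same 4096-byte block-size assumption (compare() is an illustrative helper, not the SPDK API):

    import os

    def compare(buf_a: bytes, buf_b: bytes) -> int:
        # 0 when the buffers match, non-zero otherwise (memcmp-style result).
        return 0 if buf_a == buf_b else 1

    a = os.urandom(4096)
    b = bytes(a)
    assert compare(a, b) == 0
    corrupted = b[:-1] + bytes([b[-1] ^ 0x01])     # flip one bit in the last byte
    assert compare(a, corrupted) != 0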
00:07:19.203 [2024-07-15 14:41:52.828562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693805 ] 00:07:19.203 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.203 [2024-07-15 14:41:52.882459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.203 [2024-07-15 14:41:52.954508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:52 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:53 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.203 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:19.204 14:41:53 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.578 00:07:20.578 real 0m1.321s 00:07:20.578 user 0m1.230s 00:07:20.578 sys 0m0.106s 00:07:20.578 14:41:54 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.578 14:41:54 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 ************************************ 00:07:20.578 END TEST accel_xor 00:07:20.578 ************************************ 00:07:20.578 14:41:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:20.578 14:41:54 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:20.578 14:41:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:20.578 14:41:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.578 14:41:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.578 ************************************ 00:07:20.578 START TEST accel_xor 00:07:20.578 ************************************ 00:07:20.578 14:41:54 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:20.578 14:41:54 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:20.579 [2024-07-15 14:41:54.226426] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
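Note on the first accel_xor run recorded above ("-t 1 -w xor -y", two sources per the "val=2" entry, software module): the operation XORs the source buffers byte by byte into one destination. A minimal sketch (xor_buffers() and the random 4096-byte payloads are illustrative assumptions, not SPDK code):

    import os

    def xor_buffers(sources: list[bytes]) -> bytes:
        # Byte-wise XOR of equal-length source buffers into a single destination.
        dst = bytearray(len(sources[0]))
        for src in sources:
            for i, byte in enumerate(src):
                dst[i] ^= byte
        return bytes(dst)

    srcs = [os.urandom(4096) for _ in range(2)]
    dst = xor_buffers(srcs)
    # XORing the result with one source recovers the other.
    assert xor_buffers([dst, srcs[0]]) == srcs[1]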
00:07:20.579 [2024-07-15 14:41:54.226474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694051 ] 00:07:20.579 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.579 [2024-07-15 14:41:54.281665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.579 [2024-07-15 14:41:54.353600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:20.579 14:41:54 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:21.956 14:41:55 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.956 00:07:21.956 real 0m1.335s 00:07:21.956 user 0m1.230s 00:07:21.956 sys 0m0.118s 00:07:21.956 14:41:55 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.956 14:41:55 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:21.956 ************************************ 00:07:21.956 END TEST accel_xor 00:07:21.956 ************************************ 00:07:21.956 14:41:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:21.956 14:41:55 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:21.956 14:41:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:21.956 14:41:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.956 14:41:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.956 ************************************ 00:07:21.956 START TEST accel_dif_verify 00:07:21.956 ************************************ 00:07:21.956 14:41:55 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:21.956 [2024-07-15 14:41:55.627497] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
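Note on the second accel_xor run recorded above ("-t 1 -w xor -y -x 3", i.e. three sources, software module): the same byte-wise XOR, now over three buffers, which is the parity computation used in RAID-style layouts. A small sketch of that reconstruction property (names and payloads again illustrative, not SPDK code):

    import os
    from functools import reduce

    def xor_buffers(sources):
        # Byte-wise XOR across all sources, as in the -x 3 run above.
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*sources))

    srcs = [os.urandom(4096) for _ in range(3)]
    parity = xor_buffers(srcs)
    # Any one source can be rebuilt from the parity plus the remaining two.
    assert xor_buffers([parity, srcs[0], srcs[1]]) == srcs[2]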
00:07:21.956 [2024-07-15 14:41:55.627573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694315 ] 00:07:21.956 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.956 [2024-07-15 14:41:55.684103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.956 [2024-07-15 14:41:55.756611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:21.956 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:21.957 14:41:55 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.328 14:41:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:23.329 14:41:56 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.329 00:07:23.329 real 0m1.338s 00:07:23.329 user 0m1.242s 00:07:23.329 sys 0m0.111s 00:07:23.329 14:41:56 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.329 14:41:56 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.329 ************************************ 00:07:23.329 END TEST accel_dif_verify 00:07:23.329 ************************************ 00:07:23.329 14:41:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.329 14:41:56 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:23.329 14:41:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:23.329 14:41:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.329 14:41:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.329 ************************************ 00:07:23.329 START TEST accel_dif_generate 00:07:23.329 ************************************ 00:07:23.329 14:41:57 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 
14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:23.329 [2024-07-15 14:41:57.031577] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:23.329 [2024-07-15 14:41:57.031644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694585 ] 00:07:23.329 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.329 [2024-07-15 14:41:57.087805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.329 [2024-07-15 14:41:57.158946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:23.329 14:41:57 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:23.329 14:41:57 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.755 14:41:58 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:24.755 14:41:58 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.755 00:07:24.756 real 0m1.335s 00:07:24.756 user 0m1.243s 00:07:24.756 sys 0m0.109s 00:07:24.756 14:41:58 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.756 14:41:58 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:24.756 ************************************ 00:07:24.756 END TEST accel_dif_generate 00:07:24.756 ************************************ 00:07:24.756 14:41:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.756 14:41:58 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:24.756 14:41:58 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:24.756 14:41:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.756 14:41:58 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.756 ************************************ 00:07:24.756 START TEST accel_dif_generate_copy 00:07:24.756 ************************************ 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:24.756 [2024-07-15 14:41:58.415548] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:07:24.756 [2024-07-15 14:41:58.415585] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694845 ] 00:07:24.756 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.756 [2024-07-15 14:41:58.466884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.756 [2024-07-15 14:41:58.537731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:24.756 14:41:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.127 00:07:26.127 real 0m1.319s 00:07:26.127 user 0m1.229s 00:07:26.127 sys 0m0.105s 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.127 14:41:59 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:26.127 ************************************ 00:07:26.127 END TEST accel_dif_generate_copy 00:07:26.127 ************************************ 00:07:26.127 14:41:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.127 14:41:59 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:26.127 14:41:59 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.127 14:41:59 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:26.127 14:41:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.127 14:41:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.127 ************************************ 00:07:26.127 START TEST accel_comp 00:07:26.127 ************************************ 00:07:26.127 14:41:59 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.127 14:41:59 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.127 14:41:59 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:26.128 [2024-07-15 14:41:59.815534] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:26.128 [2024-07-15 14:41:59.815605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695122 ] 00:07:26.128 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.128 [2024-07-15 14:41:59.873946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.128 [2024-07-15 14:41:59.945166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:26.128 14:41:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:26.128 14:42:00 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:27.504 14:42:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.504 00:07:27.504 real 0m1.342s 00:07:27.504 user 0m1.237s 00:07:27.504 sys 0m0.117s 00:07:27.504 14:42:01 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.504 14:42:01 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:27.504 ************************************ 00:07:27.504 END TEST accel_comp 00:07:27.504 ************************************ 00:07:27.504 14:42:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.504 14:42:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.504 14:42:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:27.504 14:42:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.504 14:42:01 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.504 ************************************ 00:07:27.504 START TEST accel_decomp 00:07:27.504 ************************************ 00:07:27.504 14:42:01 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:27.504 [2024-07-15 14:42:01.217690] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:07:27.504 [2024-07-15 14:42:01.217739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695443 ] 00:07:27.504 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.504 [2024-07-15 14:42:01.272665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.504 [2024-07-15 14:42:01.344467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.504 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:27.505 14:42:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:28.882 14:42:02 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.882 00:07:28.882 real 0m1.337s 00:07:28.882 user 0m1.239s 00:07:28.882 sys 0m0.111s 00:07:28.882 14:42:02 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.882 14:42:02 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:28.882 ************************************ 00:07:28.882 END TEST accel_decomp 00:07:28.882 ************************************ 00:07:28.882 14:42:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:28.882 14:42:02 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.882 14:42:02 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:28.882 14:42:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.882 14:42:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:28.882 ************************************ 00:07:28.882 START TEST accel_decomp_full 00:07:28.882 ************************************ 00:07:28.882 14:42:02 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:28.882 14:42:02 accel.accel_decomp_full -- 
accel/accel.sh@12 -- # build_accel_config 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:28.882 [2024-07-15 14:42:02.612833] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:28.882 [2024-07-15 14:42:02.612898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695742 ] 00:07:28.882 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.882 [2024-07-15 14:42:02.668067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.882 [2024-07-15 14:42:02.740872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@23 
-- # accel_opc=decompress 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.882 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.883 14:42:02 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:28.883 14:42:02 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:30.259 14:42:03 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.259 00:07:30.259 real 0m1.347s 00:07:30.259 user 0m1.243s 00:07:30.259 sys 0m0.116s 00:07:30.259 14:42:03 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.259 14:42:03 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:30.259 ************************************ 00:07:30.259 END TEST accel_decomp_full 00:07:30.259 ************************************ 00:07:30.259 14:42:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.259 14:42:03 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.259 14:42:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:30.259 14:42:03 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.259 14:42:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.259 ************************************ 00:07:30.259 START TEST accel_decomp_mcore 00:07:30.259 ************************************ 00:07:30.259 14:42:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.259 14:42:03 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:30.259 14:42:03 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:30.259 14:42:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.259 14:42:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.259 14:42:03 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.259 14:42:03 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:30.259 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:30.259 [2024-07-15 14:42:04.023699] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
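For reference, the whole accel_decomp_mcore case comes down to the single accel_perf invocation echoed at accel.sh@12 above. Spelled out one flag per line for readability; the annotations are a reading of this log and of accel_perf's usage text rather than authoritative documentation, and /dev/fd/62 is the JSON accel config that build_accel_config apparently feeds in via process substitution:

  # -c /dev/fd/62   JSON accel config fed in by build_accel_config (apparently via process substitution)
  # -t 1            run time: 1 second ('1 seconds' in the echoed config)
  # -w decompress   workload: decompress, handled here by the software module
  # -l .../bib      input payload for the decompress workload
  # -y              verify the output ('Yes' in the echoed config)
  # -m 0xf          core mask: four reactors, cores 0-3 per the notices that follow
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 \
      -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf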
00:07:30.259 [2024-07-15 14:42:04.023754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695995 ] 00:07:30.259 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.259 [2024-07-15 14:42:04.080223] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.259 [2024-07-15 14:42:04.153752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.259 [2024-07-15 14:42:04.153863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.259 [2024-07-15 14:42:04.153952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.259 [2024-07-15 14:42:04.153958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.518 14:42:04 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.454 00:07:31.454 real 0m1.349s 00:07:31.454 user 0m4.576s 00:07:31.454 sys 0m0.118s 00:07:31.454 14:42:05 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.454 14:42:05 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:31.454 ************************************ 00:07:31.454 END TEST accel_decomp_mcore 00:07:31.454 ************************************ 00:07:31.713 14:42:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.713 14:42:05 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.713 14:42:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:31.713 14:42:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.713 14:42:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.713 ************************************ 00:07:31.713 START TEST accel_decomp_full_mcore 00:07:31.713 ************************************ 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.713 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:31.714 [2024-07-15 14:42:05.439278] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
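Most of the volume in this part of the log is the accel.sh harness echoing its expected configuration: every repeated 'IFS=:', 'read -r var val' and 'case "$var" in' line comes from one loop over colon-separated key/value pairs (opcode, data size, module, queue depth, run time, verify). The sketch below is reconstructed from that xtrace pattern, not copied from accel.sh, so the key names and the input source are assumptions. The only change accel_decomp_full_mcore adds on top of the previous case is -o 0, which switches the echoed data size from '4096 bytes' to the full '111250 bytes':

  # rough shape of the loop behind the repeated IFS=: / read -r var val / case "$var" in lines
  # (reconstructed from the xtrace, not copied from accel.sh; key names and input source are assumed)
  while IFS=: read -r var val; do
      val=${val# }                          # strip the leading space - the bare val='...' assignments above
      case "$var" in
          *[Oo]pcode*) accel_opc=$val ;;    # traced as accel_opc=decompress
          *[Mm]odule*) accel_module=$val ;; # traced as accel_module=software
      esac
  done < <(describe_expected_config)        # hypothetical stand-in for whatever really feeds the loop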
00:07:31.714 [2024-07-15 14:42:05.439332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696243 ] 00:07:31.714 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.714 [2024-07-15 14:42:05.495791] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.714 [2024-07-15 14:42:05.571690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.714 [2024-07-15 14:42:05.571785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.714 [2024-07-15 14:42:05.571891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.714 [2024-07-15 14:42:05.571893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:31.714 14:42:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.091 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.092 00:07:33.092 real 0m1.362s 00:07:33.092 user 0m4.618s 00:07:33.092 sys 0m0.118s 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.092 14:42:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:33.092 ************************************ 00:07:33.092 END TEST accel_decomp_full_mcore 00:07:33.092 ************************************ 00:07:33.092 14:42:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.092 14:42:06 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.092 14:42:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:33.092 14:42:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.092 14:42:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.092 ************************************ 00:07:33.092 START TEST accel_decomp_mthread 00:07:33.092 ************************************ 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:33.092 14:42:06 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:33.092 [2024-07-15 14:42:06.850441] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
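Each case is ultimately judged by the three checks echoed at accel.sh@27 just before its timing summary. Written against the variables visible in the trace (accel_module and accel_opc, whose expansions show 'software' and 'decompress'), they amount to:

  # the three pass criteria, with the expansions the trace prints noted on the right
  [[ -n $accel_module ]]               # '[[ -n software ]]'   - some module was selected
  [[ -n $accel_opc ]]                  # '[[ -n decompress ]]' - the opcode made it through
  [[ $accel_module == "software" ]]    # the backslash-escaped compare - and it is the expected software path

The accel_decomp_mthread case that starts here keeps the same workload but adds -T 2, which appears to request two worker threads on the single core selected by the '-c 0x1' EAL parameters that follow.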
00:07:33.092 [2024-07-15 14:42:06.850478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696502 ] 00:07:33.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.092 [2024-07-15 14:42:06.904220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.092 [2024-07-15 14:42:06.976996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.350 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.351 14:42:07 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:33.351 14:42:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.285 00:07:34.285 real 0m1.327s 00:07:34.285 user 0m1.227s 00:07:34.285 sys 0m0.115s 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.285 14:42:08 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:34.285 ************************************ 00:07:34.285 END TEST accel_decomp_mthread 00:07:34.285 ************************************ 00:07:34.285 14:42:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.285 14:42:08 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.285 14:42:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:34.285 14:42:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.285 14:42:08 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.544 ************************************ 00:07:34.544 START TEST accel_decomp_full_mthread 00:07:34.544 ************************************ 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:34.544 [2024-07-15 14:42:08.233908] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
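The START TEST / END TEST banners and the real/user/sys triplets bracketing every case come from the run_test helper in autotest_common.sh. Its observable behaviour in this log (banner, timed execution of the wrapped command, closing banner) corresponds roughly to the following shape; this is a sketch of what the trace shows, not the helper's actual source. Here it wraps 'accel_test -t 1 -w decompress -l .../bib -y -o 0 -T 2', combining the full 111250-byte buffer with two threads per core:

  run_test() {                          # sketch of the observable behaviour only, not the real helper
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                         # yields the real/user/sys triplet printed after each case
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }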
00:07:34.544 [2024-07-15 14:42:08.233946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696901 ] 00:07:34.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.544 [2024-07-15 14:42:08.281005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.544 [2024-07-15 14:42:08.354043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:34.544 14:42:08 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:34.544 14:42:08 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.922 00:07:35.922 real 0m1.345s 00:07:35.922 user 0m1.257s 00:07:35.922 sys 0m0.102s 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.922 14:42:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:35.922 ************************************ 00:07:35.922 END 
TEST accel_decomp_full_mthread 00:07:35.922 ************************************ 00:07:35.922 14:42:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:35.922 14:42:09 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:35.922 14:42:09 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:35.922 14:42:09 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.922 14:42:09 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.922 14:42:09 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.922 14:42:09 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.922 14:42:09 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:35.922 14:42:09 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.922 14:42:09 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.922 14:42:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.922 14:42:09 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:35.922 14:42:09 accel -- accel/accel.sh@41 -- # jq -r . 00:07:35.922 14:42:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.922 ************************************ 00:07:35.922 START TEST accel_dif_functional_tests 00:07:35.922 ************************************ 00:07:35.922 14:42:09 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.922 [2024-07-15 14:42:09.660372] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:35.922 [2024-07-15 14:42:09.660408] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697375 ] 00:07:35.922 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.922 [2024-07-15 14:42:09.712630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.922 [2024-07-15 14:42:09.785920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.922 [2024-07-15 14:42:09.786019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.922 [2024-07-15 14:42:09.786019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.182 00:07:36.182 00:07:36.182 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.182 http://cunit.sourceforge.net/ 00:07:36.182 00:07:36.182 00:07:36.182 Suite: accel_dif 00:07:36.182 Test: verify: DIF generated, GUARD check ...passed 00:07:36.182 Test: verify: DIF generated, APPTAG check ...passed 00:07:36.182 Test: verify: DIF generated, REFTAG check ...passed 00:07:36.182 Test: verify: DIF not generated, GUARD check ...[2024-07-15 14:42:09.852954] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.182 passed 00:07:36.182 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 14:42:09.852999] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.182 passed 00:07:36.182 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 14:42:09.853033] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.182 passed 00:07:36.182 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:36.182 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
14:42:09.853074] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:36.182 passed 00:07:36.182 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:36.182 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:36.182 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:36.182 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 14:42:09.853169] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:36.182 passed 00:07:36.182 Test: verify copy: DIF generated, GUARD check ...passed 00:07:36.182 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:36.182 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:36.182 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 14:42:09.853272] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:36.182 passed 00:07:36.182 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 14:42:09.853294] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:36.182 passed 00:07:36.182 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 14:42:09.853313] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:36.182 passed 00:07:36.182 Test: generate copy: DIF generated, GUARD check ...passed 00:07:36.182 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:36.182 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:36.182 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:36.182 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:36.182 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:36.182 Test: generate copy: iovecs-len validate ...[2024-07-15 14:42:09.853466] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:36.182 passed 00:07:36.182 Test: generate copy: buffer alignment validate ...passed 00:07:36.182 00:07:36.182 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.182 suites 1 1 n/a 0 0 00:07:36.182 tests 26 26 26 0 0 00:07:36.182 asserts 115 115 115 0 n/a 00:07:36.182 00:07:36.182 Elapsed time = 0.000 seconds 00:07:36.182 00:07:36.182 real 0m0.403s 00:07:36.182 user 0m0.599s 00:07:36.182 sys 0m0.140s 00:07:36.182 14:42:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.182 14:42:10 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:36.182 ************************************ 00:07:36.182 END TEST accel_dif_functional_tests 00:07:36.182 ************************************ 00:07:36.182 14:42:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.182 00:07:36.182 real 0m30.920s 00:07:36.182 user 0m34.823s 00:07:36.182 sys 0m4.147s 00:07:36.182 14:42:10 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.182 14:42:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.182 ************************************ 00:07:36.182 END TEST accel 00:07:36.182 ************************************ 00:07:36.182 14:42:10 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.182 14:42:10 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.182 14:42:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.182 14:42:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.183 14:42:10 -- common/autotest_common.sh@10 -- # set +x 00:07:36.441 ************************************ 00:07:36.441 START TEST accel_rpc 00:07:36.441 ************************************ 00:07:36.441 14:42:10 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:36.441 * Looking for test storage... 00:07:36.441 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:36.441 14:42:10 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:36.441 14:42:10 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2697457 00:07:36.441 14:42:10 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2697457 00:07:36.441 14:42:10 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:36.441 14:42:10 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2697457 ']' 00:07:36.441 14:42:10 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.441 14:42:10 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.441 14:42:10 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.441 14:42:10 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.441 14:42:10 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.441 [2024-07-15 14:42:10.251568] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:07:36.441 [2024-07-15 14:42:10.251620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697457 ] 00:07:36.441 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.442 [2024-07-15 14:42:10.307588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.699 [2024-07-15 14:42:10.388464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.266 14:42:11 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:37.267 14:42:11 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:37.267 14:42:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:37.267 14:42:11 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:37.267 14:42:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:37.267 14:42:11 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:37.267 14:42:11 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:37.267 14:42:11 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:37.267 14:42:11 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.267 14:42:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.267 ************************************ 00:07:37.267 START TEST accel_assign_opcode 00:07:37.267 ************************************ 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.267 [2024-07-15 14:42:11.078506] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.267 [2024-07-15 14:42:11.086520] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.267 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:37.526 software 00:07:37.526 00:07:37.526 real 0m0.232s 00:07:37.526 user 0m0.048s 00:07:37.526 sys 0m0.008s 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.526 14:42:11 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:37.526 ************************************ 00:07:37.526 END TEST accel_assign_opcode 00:07:37.526 ************************************ 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:37.526 14:42:11 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2697457 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2697457 ']' 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2697457 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2697457 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2697457' 00:07:37.526 killing process with pid 2697457 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@967 -- # kill 2697457 00:07:37.526 14:42:11 accel_rpc -- common/autotest_common.sh@972 -- # wait 2697457 00:07:37.785 00:07:37.785 real 0m1.576s 00:07:37.785 user 0m1.651s 00:07:37.785 sys 0m0.415s 00:07:37.785 14:42:11 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.785 14:42:11 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:37.785 ************************************ 00:07:37.785 END TEST accel_rpc 00:07:37.785 ************************************ 00:07:38.044 14:42:11 -- common/autotest_common.sh@1142 -- # return 0 00:07:38.044 14:42:11 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.044 14:42:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:38.044 14:42:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.044 14:42:11 -- common/autotest_common.sh@10 -- # set +x 00:07:38.044 ************************************ 00:07:38.044 START TEST app_cmdline 00:07:38.044 ************************************ 00:07:38.044 14:42:11 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:38.044 * Looking for test storage... 
00:07:38.044 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:38.044 14:42:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:38.044 14:42:11 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:38.044 14:42:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2697828 00:07:38.044 14:42:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2697828 00:07:38.044 14:42:11 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2697828 ']' 00:07:38.044 14:42:11 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.044 14:42:11 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.044 14:42:11 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.044 14:42:11 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.044 14:42:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.044 [2024-07-15 14:42:11.881224] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:07:38.044 [2024-07-15 14:42:11.881277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697828 ] 00:07:38.044 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.044 [2024-07-15 14:42:11.936294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.303 [2024-07-15 14:42:12.018992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.871 14:42:12 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.871 14:42:12 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:38.871 14:42:12 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:39.130 { 00:07:39.130 "version": "SPDK v24.09-pre git sha1 bd4841ef7", 00:07:39.130 "fields": { 00:07:39.130 "major": 24, 00:07:39.130 "minor": 9, 00:07:39.130 "patch": 0, 00:07:39.130 "suffix": "-pre", 00:07:39.130 "commit": "bd4841ef7" 00:07:39.130 } 00:07:39.130 } 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:39.130 14:42:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.130 14:42:12 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:39.130 request: 00:07:39.130 { 00:07:39.130 "method": "env_dpdk_get_mem_stats", 00:07:39.130 "req_id": 1 00:07:39.130 } 00:07:39.130 Got JSON-RPC error response 00:07:39.130 response: 00:07:39.130 { 00:07:39.130 "code": -32601, 00:07:39.130 "message": "Method not found" 00:07:39.130 } 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:39.390 14:42:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2697828 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2697828 ']' 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2697828 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2697828 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2697828' 00:07:39.390 killing process with pid 2697828 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@967 -- # kill 2697828 00:07:39.390 14:42:13 app_cmdline -- common/autotest_common.sh@972 -- # wait 2697828 00:07:39.649 00:07:39.649 real 0m1.655s 00:07:39.649 user 0m1.967s 00:07:39.649 sys 0m0.418s 00:07:39.649 14:42:13 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.649 14:42:13 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:07:39.649 ************************************ 00:07:39.649 END TEST app_cmdline 00:07:39.649 ************************************ 00:07:39.649 14:42:13 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.649 14:42:13 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:39.649 14:42:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.649 14:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.649 14:42:13 -- common/autotest_common.sh@10 -- # set +x 00:07:39.649 ************************************ 00:07:39.649 START TEST version 00:07:39.649 ************************************ 00:07:39.649 14:42:13 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:39.649 * Looking for test storage... 00:07:39.649 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:39.908 14:42:13 version -- app/version.sh@17 -- # get_header_version major 00:07:39.908 14:42:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # cut -f2 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.908 14:42:13 version -- app/version.sh@17 -- # major=24 00:07:39.908 14:42:13 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.908 14:42:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # cut -f2 00:07:39.908 14:42:13 version -- app/version.sh@18 -- # minor=9 00:07:39.908 14:42:13 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.908 14:42:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # cut -f2 00:07:39.908 14:42:13 version -- app/version.sh@19 -- # patch=0 00:07:39.908 14:42:13 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.908 14:42:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # cut -f2 00:07:39.908 14:42:13 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.908 14:42:13 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.908 14:42:13 version -- app/version.sh@22 -- # version=24.9 00:07:39.908 14:42:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.908 14:42:13 version -- app/version.sh@28 -- # version=24.9rc0 00:07:39.908 14:42:13 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:39.908 14:42:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.908 14:42:13 version -- app/version.sh@30 -- # py_version=24.9rc0 
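
The version check traced above amounts to comparing the C header version with the Python package version. A minimal stand-alone sketch of that comparison, assuming it is run from the SPDK repository root; the "-pre" to "rc0" mapping mirrors what the trace shows version.sh computing, not the script's exact code:

    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ -n $suffix ]] && version=${version}rc0    # approximation: the trace shows "-pre" reported as "rc0"
    # assumes the repo's python/ directory provides the spdk package, as in the PYTHONPATH above
    py_version=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "OK: header and python package agree on $version"
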
00:07:39.908 14:42:13 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:39.908 00:07:39.908 real 0m0.156s 00:07:39.908 user 0m0.079s 00:07:39.908 sys 0m0.109s 00:07:39.908 14:42:13 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.908 14:42:13 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.908 ************************************ 00:07:39.908 END TEST version 00:07:39.908 ************************************ 00:07:39.908 14:42:13 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.908 14:42:13 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:39.908 14:42:13 -- spdk/autotest.sh@198 -- # uname -s 00:07:39.908 14:42:13 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:39.908 14:42:13 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.908 14:42:13 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:39.908 14:42:13 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:39.908 14:42:13 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:39.908 14:42:13 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:39.908 14:42:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:39.908 14:42:13 -- common/autotest_common.sh@10 -- # set +x 00:07:39.908 14:42:13 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:39.908 14:42:13 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:39.908 14:42:13 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:39.908 14:42:13 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:39.908 14:42:13 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:07:39.908 14:42:13 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:39.908 14:42:13 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.908 14:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.908 14:42:13 -- common/autotest_common.sh@10 -- # set +x 00:07:39.908 ************************************ 00:07:39.908 START TEST nvmf_rdma 00:07:39.908 ************************************ 00:07:39.908 14:42:13 nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:39.908 * Looking for test storage... 00:07:40.168 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:40.168 14:42:13 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.168 14:42:13 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.168 14:42:13 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.168 14:42:13 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.168 14:42:13 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.168 14:42:13 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.168 14:42:13 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:40.168 14:42:13 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:40.168 14:42:13 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.168 14:42:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:40.168 14:42:13 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:40.168 14:42:13 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.168 14:42:13 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.168 14:42:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:40.168 ************************************ 00:07:40.168 START TEST nvmf_example 00:07:40.168 ************************************ 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:40.168 * Looking for test storage... 
00:07:40.168 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:40.168 14:42:13 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.168 14:42:14 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:40.169 14:42:14 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.169 14:42:14 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:45.441 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:45.441 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.441 14:42:19 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:45.441 Found net devices under 0000:da:00.0: mlx_0_0 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:45.441 Found net devices under 0000:da:00.1: mlx_0_1 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:45.441 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:45.441 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:45.441 altname enp218s0f0np0 00:07:45.441 altname ens818f0np0 00:07:45.441 inet 192.168.100.8/24 scope global mlx_0_0 00:07:45.441 valid_lft forever preferred_lft forever 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:45.441 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:45.441 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:45.441 altname enp218s0f1np1 00:07:45.441 altname ens818f1np1 00:07:45.441 inet 192.168.100.9/24 scope global mlx_0_1 00:07:45.441 valid_lft forever preferred_lft forever 00:07:45.441 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:45.442 14:42:19 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:45.442 192.168.100.9' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:45.442 192.168.100.9' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:45.442 
14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:45.442 192.168.100.9' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:45.442 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2701390 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2701390 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2701390 ']' 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
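
At this point the example target (PID 2701390) has been launched with -m 0xF and the harness waits for its RPC socket to come up. A rough sketch of that wait loop, assuming the default /var/tmp/spdk.sock socket; this approximates waitforlisten rather than reproducing it:

    pid=2701390                      # PID reported for build/examples/nvmf above
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "target exited before listening" >&2; exit 1; }
        # ready once the RPC socket answers a trivial request
        if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
            echo "target is listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.5
    done
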
00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:45.699 14:42:19 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:45.699 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:46.655 14:42:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
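
The RPC sequence traced just above builds the configuration that spdk_nvme_perf then exercises. Issued by hand against the running example target, the same setup would look roughly like this (a sketch using scripts/rpc.py with the arguments shown in the trace; the default RPC socket is assumed):

    rpc=./scripts/rpc.py             # run from the SPDK repository root
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512   # prints the bdev name, "Malloc0"
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

With the listener up on 192.168.100.8:4420, the perf invocation above connects over RDMA to nqn.2016-06.io.spdk:cnode1 and drives a queue-depth-64, 4 KiB, random mixed read/write workload (-M 30 read mix) for 10 seconds, producing the latency summary that follows.
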
00:07:46.655 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.856 Initializing NVMe Controllers 00:07:58.856 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:58.856 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:58.856 Initialization complete. Launching workers. 00:07:58.856 ======================================================== 00:07:58.856 Latency(us) 00:07:58.856 Device Information : IOPS MiB/s Average min max 00:07:58.856 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24865.38 97.13 2573.37 639.26 13471.36 00:07:58.856 ======================================================== 00:07:58.856 Total : 24865.38 97.13 2573.37 639.26 13471.36 00:07:58.856 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:58.856 rmmod nvme_rdma 00:07:58.856 rmmod nvme_fabrics 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2701390 ']' 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2701390 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2701390 ']' 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2701390 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2701390 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2701390' 00:07:58.856 killing process with pid 2701390 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # kill 2701390 00:07:58.856 14:42:31 nvmf_rdma.nvmf_example -- common/autotest_common.sh@972 -- # wait 2701390 00:07:58.856 nvmf threads initialize successfully 00:07:58.856 bdev subsystem init successfully 00:07:58.856 created a nvmf target service 00:07:58.856 create targets's poll groups done 00:07:58.856 all subsystems of target started 00:07:58.856 nvmf target is running 00:07:58.856 all subsystems of target stopped 00:07:58.856 destroy targets's poll groups done 
00:07:58.856 destroyed the nvmf target service 00:07:58.856 bdev subsystem finish successfully 00:07:58.856 nvmf threads destroy successfully 00:07:58.856 14:42:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.856 14:42:32 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:58.856 14:42:32 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:58.856 14:42:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.856 14:42:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:58.856 00:07:58.856 real 0m18.227s 00:07:58.856 user 0m51.726s 00:07:58.856 sys 0m4.486s 00:07:58.856 14:42:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.856 14:42:32 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:58.856 ************************************ 00:07:58.856 END TEST nvmf_example 00:07:58.856 ************************************ 00:07:58.856 14:42:32 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:07:58.856 14:42:32 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:58.856 14:42:32 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.856 14:42:32 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.856 14:42:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:58.856 ************************************ 00:07:58.856 START TEST nvmf_filesystem 00:07:58.856 ************************************ 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:58.856 * Looking for test storage... 
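(run_test, used just above to launch filesystem.sh, is the autotest_common.sh wrapper that prints the START/END banners, times the test, and propagates its exit status. A simplified sketch of that pattern, with the banner and timing bookkeeping reduced to the essentials; the real helper also records the test name for reporting.)

    # Minimal sketch of the run_test pattern: banner, time the command, banner again.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?                                  # exit status of the timed command
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test_sketch nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma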
00:07:58.856 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:58.856 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:58.857 14:42:32 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:58.857 
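(Everything dumped above comes from test/common/build_config.sh, which is a plain list of CONFIG_*=value shell assignments, so a test script can source it and branch on a build feature in one line. A hedged sketch; the UBSAN and RDMA checks are illustrative flags picked from the values shown above, and the checkout path is an assumption.)

    # Sketch: consume the build configuration the same way the harness does.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # assumed checkout location
    source "$SPDK_ROOT/test/common/build_config.sh"
    if [[ $CONFIG_UBSAN == y ]]; then
        echo "UBSAN-instrumented build: UBSAN_OPTIONS will be honored"
    fi
    [[ $CONFIG_RDMA == y ]] || { echo "build lacks RDMA support, skipping"; exit 0; }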
14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:58.857 #define SPDK_CONFIG_H 00:07:58.857 #define SPDK_CONFIG_APPS 1 00:07:58.857 #define SPDK_CONFIG_ARCH native 00:07:58.857 #undef SPDK_CONFIG_ASAN 00:07:58.857 #undef SPDK_CONFIG_AVAHI 00:07:58.857 #undef SPDK_CONFIG_CET 00:07:58.857 #define SPDK_CONFIG_COVERAGE 1 00:07:58.857 #define SPDK_CONFIG_CROSS_PREFIX 00:07:58.857 #undef SPDK_CONFIG_CRYPTO 00:07:58.857 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:58.857 #undef SPDK_CONFIG_CUSTOMOCF 00:07:58.857 #undef SPDK_CONFIG_DAOS 00:07:58.857 #define SPDK_CONFIG_DAOS_DIR 00:07:58.857 #define SPDK_CONFIG_DEBUG 1 00:07:58.857 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:58.857 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:58.857 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:58.857 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:58.857 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:58.857 #undef SPDK_CONFIG_DPDK_UADK 00:07:58.857 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:58.857 #define SPDK_CONFIG_EXAMPLES 1 00:07:58.857 #undef SPDK_CONFIG_FC 00:07:58.857 #define SPDK_CONFIG_FC_PATH 00:07:58.857 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:58.857 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:58.857 #undef SPDK_CONFIG_FUSE 00:07:58.857 #undef SPDK_CONFIG_FUZZER 00:07:58.857 #define SPDK_CONFIG_FUZZER_LIB 00:07:58.857 #undef SPDK_CONFIG_GOLANG 00:07:58.857 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:58.857 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:58.857 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:58.857 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:58.857 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:58.857 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:58.857 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:58.857 #define SPDK_CONFIG_IDXD 1 00:07:58.857 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:58.857 #undef SPDK_CONFIG_IPSEC_MB 00:07:58.857 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:58.857 #define SPDK_CONFIG_ISAL 1 00:07:58.857 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:58.857 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:58.857 #define SPDK_CONFIG_LIBDIR 00:07:58.857 #undef SPDK_CONFIG_LTO 00:07:58.857 #define SPDK_CONFIG_MAX_LCORES 128 00:07:58.857 #define SPDK_CONFIG_NVME_CUSE 1 00:07:58.857 #undef SPDK_CONFIG_OCF 00:07:58.857 #define 
SPDK_CONFIG_OCF_PATH 00:07:58.857 #define SPDK_CONFIG_OPENSSL_PATH 00:07:58.857 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:58.857 #define SPDK_CONFIG_PGO_DIR 00:07:58.857 #undef SPDK_CONFIG_PGO_USE 00:07:58.857 #define SPDK_CONFIG_PREFIX /usr/local 00:07:58.857 #undef SPDK_CONFIG_RAID5F 00:07:58.857 #undef SPDK_CONFIG_RBD 00:07:58.857 #define SPDK_CONFIG_RDMA 1 00:07:58.857 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:58.857 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:58.857 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:58.857 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:58.857 #define SPDK_CONFIG_SHARED 1 00:07:58.857 #undef SPDK_CONFIG_SMA 00:07:58.857 #define SPDK_CONFIG_TESTS 1 00:07:58.857 #undef SPDK_CONFIG_TSAN 00:07:58.857 #define SPDK_CONFIG_UBLK 1 00:07:58.857 #define SPDK_CONFIG_UBSAN 1 00:07:58.857 #undef SPDK_CONFIG_UNIT_TESTS 00:07:58.857 #undef SPDK_CONFIG_URING 00:07:58.857 #define SPDK_CONFIG_URING_PATH 00:07:58.857 #undef SPDK_CONFIG_URING_ZNS 00:07:58.857 #undef SPDK_CONFIG_USDT 00:07:58.857 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:58.857 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:58.857 #undef SPDK_CONFIG_VFIO_USER 00:07:58.857 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:58.857 #define SPDK_CONFIG_VHOST 1 00:07:58.857 #define SPDK_CONFIG_VIRTIO 1 00:07:58.857 #undef SPDK_CONFIG_VTUNE 00:07:58.857 #define SPDK_CONFIG_VTUNE_DIR 00:07:58.857 #define SPDK_CONFIG_WERROR 1 00:07:58.857 #define SPDK_CONFIG_WPDK_DIR 00:07:58.857 #undef SPDK_CONFIG_XNVME 00:07:58.857 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:58.857 14:42:32 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:58.858 14:42:32 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:58.858 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@158 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:07:58.859 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2703547 ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2703547 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.BHN5vz 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.BHN5vz/tests/target /tmp/spdk.BHN5vz 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1050284032 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4234145792 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189891137536 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974311936 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6083174400 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97931517952 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987153920 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=55635968 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185481728 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194865664 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9383936 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986301952 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987158016 
00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=856064 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597426688 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597430784 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:58.860 * Looking for test storage... 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189891137536 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8297766912 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.860 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 
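(The block above is autotest_common.sh's set_test_storage picking a filesystem with at least ~2 GiB free for test scratch space. Stripped of the per-mount array bookkeeping, the decision it makes is roughly the following; this sketch keeps only the free-space comparison and skips the overlay/tmpfs special cases and the new_size > 95% check.)

    # Simplified sketch of the storage check: require ~2 GiB free under the test dir, else fall back to a temp dir.
    requested_size=2214592512                       # 2 GiB plus a 64 MiB margin, as in the trace
    testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
    avail=$(df --output=avail -B1 "$testdir" | tail -n 1)
    if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE=$testdir
    else
        export SPDK_TEST_STORAGE=$(mktemp -dt spdk.XXXXXX)    # analogous to the storage_fallback path above
    fi
    echo "using $SPDK_TEST_STORAGE for test scratch files"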
00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:58.860 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.861 14:42:32 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.058 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:03.318 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:03.319 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:03.319 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:03.319 Found net devices under 0000:da:00.0: mlx_0_0 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:03.319 Found net devices under 0000:da:00.1: mlx_0_1 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:03.319 14:42:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem 
-- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:03.319 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.319 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:03.319 altname enp218s0f0np0 00:08:03.319 altname ens818f0np0 00:08:03.319 inet 192.168.100.8/24 scope global mlx_0_0 00:08:03.319 valid_lft forever preferred_lft forever 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show 
mlx_0_1 00:08:03.319 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:03.319 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:03.319 altname enp218s0f1np1 00:08:03.319 altname ens818f1np1 00:08:03.319 inet 192.168.100.9/24 scope global mlx_0_1 00:08:03.319 valid_lft forever preferred_lft forever 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 
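The allocate_nic_ips steps above resolve each RDMA interface to its IPv4 address with the same three-stage pipeline the trace shows (ip -o -4 addr show, awk for the fourth field, cut to strip the prefix length). A minimal standalone version of that helper, using the interface names reported in this run:

    # Standalone version of the get_ip_address pattern in the trace:
    # "ip -o -4" prints one line per address; field 4 is "addr/prefix".
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # prints 192.168.100.8 in this run
    get_ip_address mlx_0_1   # prints 192.168.100.9 in this run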
00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:03.319 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:03.320 192.168.100.9' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:03.320 192.168.100.9' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:03.320 192.168.100.9' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.320 ************************************ 00:08:03.320 START TEST nvmf_filesystem_no_in_capsule 00:08:03.320 ************************************ 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2706582 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2706582 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@829 -- # '[' -z 2706582 ']' 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.320 14:42:37 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.579 [2024-07-15 14:42:37.256524] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:08:03.579 [2024-07-15 14:42:37.256578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.579 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.579 [2024-07-15 14:42:37.314090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.579 [2024-07-15 14:42:37.392066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.579 [2024-07-15 14:42:37.392107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.579 [2024-07-15 14:42:37.392114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.579 [2024-07-15 14:42:37.392120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.579 [2024-07-15 14:42:37.392124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
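At this point nvmfappstart has launched the target with -i 0 -e 0xFFFF -m 0xF and is blocking in waitforlisten until the RPC socket answers. A hedged sketch of that start-and-wait step; the polling loop below is illustrative, not the exact waitforlisten implementation:

    # Hedged sketch: start nvmf_tgt with the flags from the log, then poll
    # the RPC socket. The retry loop is illustrative, not waitforlisten itself.
    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for _ in $(seq 1 100); do
        # rpc_get_methods only succeeds once the app is listening on the socket.
        "$RPC_PY" -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done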
00:08:03.579 [2024-07-15 14:42:37.392222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.579 [2024-07-15 14:42:37.392326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.579 [2024-07-15 14:42:37.392419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.579 [2024-07-15 14:42:37.392420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.513 [2024-07-15 14:42:38.110470] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:04.513 [2024-07-15 14:42:38.130757] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xce9cc0/0xcee1b0) succeed. 00:08:04.513 [2024-07-15 14:42:38.139861] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xceb300/0xd2f840) succeed. 
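With the reactors up, the harness creates the RDMA transport with in-capsule data disabled (-c 0); the lines that follow add a 512 MiB Malloc bdev, the cnode1 subsystem, its namespace, and a listener on 192.168.100.8:4420. A sketch of the same sequence issued directly through rpc.py, on the assumption that rpc_cmd forwards these arguments to the default /var/tmp/spdk.sock socket:

    # Provisioning RPCs as traced (flags copied from the log); assumes rpc_cmd
    # resolves to scripts/rpc.py against the default /var/tmp/spdk.sock.
    RPC="python3 /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $RPC bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420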
00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.513 Malloc1 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.513 [2024-07-15 14:42:38.386854] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.513 14:42:38 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.513 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:04.513 { 00:08:04.513 "name": "Malloc1", 00:08:04.513 "aliases": [ 00:08:04.513 "4ae6c2f1-2363-4d3a-88d1-fe44e292fb16" 00:08:04.513 ], 00:08:04.513 "product_name": "Malloc disk", 00:08:04.513 "block_size": 512, 00:08:04.513 "num_blocks": 1048576, 00:08:04.513 "uuid": "4ae6c2f1-2363-4d3a-88d1-fe44e292fb16", 00:08:04.513 "assigned_rate_limits": { 00:08:04.513 "rw_ios_per_sec": 0, 00:08:04.513 "rw_mbytes_per_sec": 0, 00:08:04.513 "r_mbytes_per_sec": 0, 00:08:04.513 "w_mbytes_per_sec": 0 00:08:04.513 }, 00:08:04.513 "claimed": true, 00:08:04.513 "claim_type": "exclusive_write", 00:08:04.513 "zoned": false, 00:08:04.513 "supported_io_types": { 00:08:04.513 "read": true, 00:08:04.513 "write": true, 00:08:04.513 "unmap": true, 00:08:04.513 "flush": true, 00:08:04.513 "reset": true, 00:08:04.513 "nvme_admin": false, 00:08:04.513 "nvme_io": false, 00:08:04.513 "nvme_io_md": false, 00:08:04.513 "write_zeroes": true, 00:08:04.513 "zcopy": true, 00:08:04.513 "get_zone_info": false, 00:08:04.513 "zone_management": false, 00:08:04.513 "zone_append": false, 00:08:04.514 "compare": false, 00:08:04.514 "compare_and_write": false, 00:08:04.514 "abort": true, 00:08:04.514 "seek_hole": false, 00:08:04.514 "seek_data": false, 00:08:04.514 "copy": true, 00:08:04.514 "nvme_iov_md": false 00:08:04.514 }, 00:08:04.514 "memory_domains": [ 00:08:04.514 { 00:08:04.514 "dma_device_id": "system", 00:08:04.514 "dma_device_type": 1 00:08:04.514 }, 00:08:04.514 { 00:08:04.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.514 "dma_device_type": 2 00:08:04.514 } 00:08:04.514 ], 00:08:04.514 "driver_specific": {} 00:08:04.514 } 00:08:04.514 ]' 00:08:04.514 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:04.772 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:04.772 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:04.772 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:04.772 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:04.772 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:04.772 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:04.772 14:42:38 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:05.706 14:42:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.706 14:42:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:05.706 14:42:39 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.706 14:42:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:05.706 14:42:39 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:07.634 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:07.938 14:42:41 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.872 14:42:42 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.872 ************************************ 00:08:08.872 START TEST filesystem_ext4 00:08:08.872 ************************************ 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:08.872 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:08.872 mke2fs 1.46.5 (30-Dec-2021) 00:08:09.131 Discarding device blocks: 0/522240 done 00:08:09.131 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:09.131 Filesystem UUID: 20ca7596-9457-4e4f-a280-e65ad8c86cd4 00:08:09.131 Superblock backups stored on blocks: 00:08:09.131 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:09.131 00:08:09.131 Allocating group tables: 0/64 done 00:08:09.131 Writing inode tables: 0/64 done 00:08:09.131 Creating journal (8192 blocks): done 00:08:09.131 Writing superblocks and filesystem accounting information: 0/64 done 00:08:09.131 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:09.131 14:42:42 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2706582 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.131 00:08:09.131 real 0m0.173s 00:08:09.131 user 0m0.022s 00:08:09.131 sys 0m0.064s 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:09.131 ************************************ 00:08:09.131 END TEST filesystem_ext4 00:08:09.131 ************************************ 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.131 ************************************ 00:08:09.131 START TEST filesystem_btrfs 00:08:09.131 ************************************ 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:09.131 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:09.132 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:09.132 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = 
ext4 ']' 00:08:09.132 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:09.132 14:42:42 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:09.390 btrfs-progs v6.6.2 00:08:09.390 See https://btrfs.readthedocs.io for more information. 00:08:09.390 00:08:09.390 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:09.390 NOTE: several default settings have changed in version 5.15, please make sure 00:08:09.390 this does not affect your deployments: 00:08:09.390 - DUP for metadata (-m dup) 00:08:09.390 - enabled no-holes (-O no-holes) 00:08:09.390 - enabled free-space-tree (-R free-space-tree) 00:08:09.390 00:08:09.390 Label: (null) 00:08:09.390 UUID: 8d71fc01-afc3-4b37-853e-8aeae351807c 00:08:09.390 Node size: 16384 00:08:09.390 Sector size: 4096 00:08:09.390 Filesystem size: 510.00MiB 00:08:09.390 Block group profiles: 00:08:09.390 Data: single 8.00MiB 00:08:09.391 Metadata: DUP 32.00MiB 00:08:09.391 System: DUP 8.00MiB 00:08:09.391 SSD detected: yes 00:08:09.391 Zoned device: no 00:08:09.391 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:09.391 Runtime features: free-space-tree 00:08:09.391 Checksum: crc32c 00:08:09.391 Number of devices: 1 00:08:09.391 Devices: 00:08:09.391 ID SIZE PATH 00:08:09.391 1 510.00MiB /dev/nvme0n1p1 00:08:09.391 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2706582 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.391 00:08:09.391 real 0m0.230s 00:08:09.391 user 0m0.014s 00:08:09.391 sys 0m0.122s 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.391 ************************************ 00:08:09.391 END TEST filesystem_btrfs 00:08:09.391 ************************************ 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.391 ************************************ 00:08:09.391 START TEST filesystem_xfs 00:08:09.391 ************************************ 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:09.391 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:09.649 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:09.649 = sectsz=512 attr=2, projid32bit=1 00:08:09.649 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:09.649 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:09.649 data = bsize=4096 blocks=130560, imaxpct=25 00:08:09.649 = sunit=0 swidth=0 blks 00:08:09.649 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:09.649 log =internal log bsize=4096 blocks=16384, version=2 00:08:09.649 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:09.649 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:09.649 Discarding blocks...Done. 
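The filesystem_ext4 and filesystem_btrfs subtests above, and the xfs run that continues below, all exercise the same pattern: force-format the exported partition, mount it, create and delete a file with syncs in between, unmount, and verify the target process survived. A condensed sketch of that loop, with the device, mount point, and PID taken from this run:

    # Condensed sketch of the per-filesystem check repeated for ext4/btrfs/xfs.
    nvmfpid=2706582                      # nvmf_tgt PID reported earlier in this run

    run_fs_check() {
        local fstype=$1 dev=/dev/nvme0n1p1 mnt=/mnt/device

        case "$fstype" in
            ext4) mkfs.ext4 -F "$dev" ;;          # ext4 forces with -F
            *)    "mkfs.$fstype" -f "$dev" ;;     # btrfs and xfs force with -f
        esac

        mount "$dev" "$mnt"
        touch "$mnt/aaa" && sync                  # write something over NVMe/RDMA
        rm "$mnt/aaa" && sync
        umount "$mnt"

        kill -0 "$nvmfpid"                        # target must still be alive
    }

    run_fs_check ext4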
00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2706582 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.649 00:08:09.649 real 0m0.190s 00:08:09.649 user 0m0.022s 00:08:09.649 sys 0m0.068s 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.649 ************************************ 00:08:09.649 END TEST filesystem_xfs 00:08:09.649 ************************************ 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:09.649 14:42:43 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.581 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
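
Each filesystem subtest above exercises the same checks from target/filesystem.sh (lines 23-43 of the script in this trace): mount the freshly formatted partition, create and delete a file with syncs in between, unmount, confirm the nvmf target process is still alive, and confirm the namespace and partition are still visible. A condensed sketch of those steps, with NVMFPID standing in for the target PID (2706582 in this run):

  # Per-filesystem verification steps traced above
  # (target/filesystem.sh lines 23-43); NVMFPID is a placeholder.
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$NVMFPID"                       # target must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible
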
00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2706582 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2706582 ']' 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2706582 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:10.581 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2706582 00:08:10.839 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:10.839 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:10.839 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2706582' 00:08:10.839 killing process with pid 2706582 00:08:10.839 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2706582 00:08:10.839 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2706582 00:08:11.097 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:11.097 00:08:11.097 real 0m7.712s 00:08:11.097 user 0m30.059s 00:08:11.097 sys 0m1.047s 00:08:11.097 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.098 ************************************ 00:08:11.098 END TEST nvmf_filesystem_no_in_capsule 00:08:11.098 ************************************ 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:11.098 14:42:44 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.098 ************************************ 00:08:11.098 START TEST nvmf_filesystem_in_capsule 00:08:11.098 ************************************ 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2708064 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2708064 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2708064 ']' 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.098 14:42:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.355 [2024-07-15 14:42:45.054359] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:08:11.355 [2024-07-15 14:42:45.054410] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.355 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.355 [2024-07-15 14:42:45.110956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.355 [2024-07-15 14:42:45.190855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.355 [2024-07-15 14:42:45.190894] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
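
Before any rpc_cmd call, the in-capsule variant launches its own nvmf_tgt and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that start-and-wait pattern; SPDK_BIN_DIR stands in for the build path used in this run, and the polling loop is illustrative (the real waitforlisten helper in common/autotest_common.sh does more checking than shown here):

  # Illustrative start-and-wait pattern; the real waitforlisten helper
  # is more thorough than this loop.
  "$SPDK_BIN_DIR/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid"   # stop waiting if the target died
      sleep 0.5
  done
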
00:08:11.355 [2024-07-15 14:42:45.190900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.355 [2024-07-15 14:42:45.190906] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.355 [2024-07-15 14:42:45.190911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:11.355 [2024-07-15 14:42:45.190946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.355 [2024-07-15 14:42:45.191047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.355 [2024-07-15 14:42:45.191144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.355 [2024-07-15 14:42:45.191145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.290 14:42:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 [2024-07-15 14:42:45.903762] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x649cc0/0x64e1b0) succeed. 00:08:12.290 [2024-07-15 14:42:45.912896] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x64b300/0x68f840) succeed. 
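
With both mlx5 IB devices registered, the test builds the export entirely over RPC (the rpc_cmd calls traced just above and below): an RDMA transport with 4096-byte in-capsule data, a 512 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and an RDMA listener. The same sequence written directly against scripts/rpc.py as a sketch; the names and addresses are the ones this run uses, and the test itself issues them through its rpc_cmd wrapper instead:

  # RPC sequence for the in-capsule export, shown against
  # scripts/rpc.py for clarity.
  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
  $rpc bdev_malloc_create 512 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
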
00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 Malloc1 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 [2024-07-15 14:42:46.177779] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.290 
14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:12.290 { 00:08:12.290 "name": "Malloc1", 00:08:12.290 "aliases": [ 00:08:12.290 "bfa7d1a2-3568-4178-b675-c132f590d271" 00:08:12.290 ], 00:08:12.290 "product_name": "Malloc disk", 00:08:12.290 "block_size": 512, 00:08:12.290 "num_blocks": 1048576, 00:08:12.290 "uuid": "bfa7d1a2-3568-4178-b675-c132f590d271", 00:08:12.290 "assigned_rate_limits": { 00:08:12.290 "rw_ios_per_sec": 0, 00:08:12.290 "rw_mbytes_per_sec": 0, 00:08:12.290 "r_mbytes_per_sec": 0, 00:08:12.290 "w_mbytes_per_sec": 0 00:08:12.290 }, 00:08:12.290 "claimed": true, 00:08:12.290 "claim_type": "exclusive_write", 00:08:12.290 "zoned": false, 00:08:12.290 "supported_io_types": { 00:08:12.290 "read": true, 00:08:12.290 "write": true, 00:08:12.290 "unmap": true, 00:08:12.290 "flush": true, 00:08:12.290 "reset": true, 00:08:12.290 "nvme_admin": false, 00:08:12.290 "nvme_io": false, 00:08:12.290 "nvme_io_md": false, 00:08:12.290 "write_zeroes": true, 00:08:12.290 "zcopy": true, 00:08:12.290 "get_zone_info": false, 00:08:12.290 "zone_management": false, 00:08:12.290 "zone_append": false, 00:08:12.290 "compare": false, 00:08:12.290 "compare_and_write": false, 00:08:12.290 "abort": true, 00:08:12.290 "seek_hole": false, 00:08:12.290 "seek_data": false, 00:08:12.290 "copy": true, 00:08:12.290 "nvme_iov_md": false 00:08:12.290 }, 00:08:12.290 "memory_domains": [ 00:08:12.290 { 00:08:12.290 "dma_device_id": "system", 00:08:12.290 "dma_device_type": 1 00:08:12.290 }, 00:08:12.290 { 00:08:12.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.290 "dma_device_type": 2 00:08:12.290 } 00:08:12.290 ], 00:08:12.290 "driver_specific": {} 00:08:12.290 } 00:08:12.290 ]' 00:08:12.290 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:12.548 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:12.548 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:12.548 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:12.548 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:12.548 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:12.548 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:12.548 14:42:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:13.483 14:42:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.483 14:42:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:13.483 14:42:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.483 14:42:47 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:13.483 14:42:47 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:15.381 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:15.381 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:15.381 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.381 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:15.381 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:15.382 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:15.639 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:15.639 14:42:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.573 ************************************ 00:08:16.573 START TEST filesystem_in_capsule_ext4 00:08:16.573 
************************************ 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:16.573 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:16.573 mke2fs 1.46.5 (30-Dec-2021) 00:08:16.831 Discarding device blocks: 0/522240 done 00:08:16.831 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:16.831 Filesystem UUID: 525eb21e-8507-4325-8ac7-dcfb98069d8c 00:08:16.831 Superblock backups stored on blocks: 00:08:16.831 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:16.831 00:08:16.831 Allocating group tables: 0/64 done 00:08:16.831 Writing inode tables: 0/64 done 00:08:16.831 Creating journal (8192 blocks): done 00:08:16.831 Writing superblocks and filesystem accounting information: 0/64 done 00:08:16.831 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2708064 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.831 00:08:16.831 real 0m0.172s 00:08:16.831 user 0m0.020s 00:08:16.831 sys 0m0.066s 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:16.831 ************************************ 00:08:16.831 END TEST filesystem_in_capsule_ext4 00:08:16.831 ************************************ 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.831 ************************************ 00:08:16.831 START TEST filesystem_in_capsule_btrfs 00:08:16.831 ************************************ 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:16.831 14:42:50 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:16.831 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:17.089 btrfs-progs v6.6.2 00:08:17.089 See https://btrfs.readthedocs.io for more information. 00:08:17.089 00:08:17.089 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:17.089 NOTE: several default settings have changed in version 5.15, please make sure 00:08:17.089 this does not affect your deployments: 00:08:17.089 - DUP for metadata (-m dup) 00:08:17.089 - enabled no-holes (-O no-holes) 00:08:17.089 - enabled free-space-tree (-R free-space-tree) 00:08:17.089 00:08:17.089 Label: (null) 00:08:17.089 UUID: 09688a1d-ff4d-4710-b986-9791aa4ef771 00:08:17.089 Node size: 16384 00:08:17.089 Sector size: 4096 00:08:17.089 Filesystem size: 510.00MiB 00:08:17.089 Block group profiles: 00:08:17.089 Data: single 8.00MiB 00:08:17.089 Metadata: DUP 32.00MiB 00:08:17.090 System: DUP 8.00MiB 00:08:17.090 SSD detected: yes 00:08:17.090 Zoned device: no 00:08:17.090 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:17.090 Runtime features: free-space-tree 00:08:17.090 Checksum: crc32c 00:08:17.090 Number of devices: 1 00:08:17.090 Devices: 00:08:17.090 ID SIZE PATH 00:08:17.090 1 510.00MiB /dev/nvme0n1p1 00:08:17.090 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2708064 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.090 00:08:17.090 real 0m0.240s 00:08:17.090 user 0m0.035s 00:08:17.090 sys 0m0.112s 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:17.090 ************************************ 00:08:17.090 END TEST filesystem_in_capsule_btrfs 00:08:17.090 ************************************ 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.090 14:42:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:17.348 ************************************ 00:08:17.348 START TEST filesystem_in_capsule_xfs 00:08:17.348 ************************************ 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:17.348 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:17.348 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:17.349 = sectsz=512 attr=2, projid32bit=1 00:08:17.349 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:17.349 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:17.349 data = bsize=4096 blocks=130560, imaxpct=25 00:08:17.349 = sunit=0 swidth=0 blks 00:08:17.349 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:08:17.349 log =internal log bsize=4096 blocks=16384, version=2 00:08:17.349 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:17.349 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:17.349 Discarding blocks...Done. 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2708064 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.349 00:08:17.349 real 0m0.190s 00:08:17.349 user 0m0.018s 00:08:17.349 sys 0m0.071s 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:17.349 ************************************ 00:08:17.349 END TEST filesystem_in_capsule_xfs 00:08:17.349 ************************************ 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:17.349 14:42:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:18.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.283 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:18.283 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:08:18.283 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:18.283 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2708064 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2708064 ']' 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2708064 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2708064 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2708064' 00:08:18.541 killing process with pid 2708064 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2708064 00:08:18.541 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2708064 00:08:18.800 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:18.800 00:08:18.800 real 0m7.699s 00:08:18.800 user 0m29.919s 00:08:18.800 sys 0m1.056s 00:08:18.800 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:18.800 14:42:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:18.800 ************************************ 00:08:18.800 END TEST nvmf_filesystem_in_capsule 00:08:18.800 ************************************ 00:08:19.058 14:42:52 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:19.058 rmmod nvme_rdma 00:08:19.058 rmmod nvme_fabrics 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:19.058 00:08:19.058 real 0m20.563s 00:08:19.058 user 1m1.333s 00:08:19.058 sys 0m5.890s 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.058 14:42:52 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.058 ************************************ 00:08:19.058 END TEST nvmf_filesystem 00:08:19.058 ************************************ 00:08:19.058 14:42:52 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:19.058 14:42:52 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:19.058 14:42:52 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.058 14:42:52 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.058 14:42:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:19.058 ************************************ 00:08:19.058 START TEST nvmf_target_discovery 00:08:19.058 ************************************ 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:19.058 * Looking for test storage... 
00:08:19.058 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.058 14:42:52 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.059 14:42:52 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:24.321 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:24.321 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:24.321 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.322 14:42:58 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:24.322 Found net devices under 0000:da:00.0: mlx_0_0 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:24.322 Found net devices under 0000:da:00.1: mlx_0_1 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:24.322 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:24.322 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:24.322 altname enp218s0f0np0 00:08:24.322 altname ens818f0np0 00:08:24.322 inet 192.168.100.8/24 scope global mlx_0_0 00:08:24.322 valid_lft forever preferred_lft forever 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:24.322 14:42:58 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:24.322 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:24.322 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:24.322 altname enp218s0f1np1 00:08:24.322 altname ens818f1np1 00:08:24.322 inet 192.168.100.9/24 scope global mlx_0_1 00:08:24.322 valid_lft forever preferred_lft forever 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:24.322 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:24.323 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:24.323 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:24.323 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.323 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.323 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:24.323 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:24.323 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:24.580 192.168.100.9' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:24.580 192.168.100.9' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:24.580 192.168.100.9' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2712600 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2712600 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2712600 ']' 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
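For reference, the target bring-up traced above reduces to a short manual sequence: derive the IPv4 address of each RDMA-capable interface, load the kernel initiator module, and launch the target application. A minimal standalone sketch of those steps, assuming the same SPDK build tree this job uses (the suite itself performs them through its nvmftestinit/nvmfappstart helpers):

  # Resolve the interface address the same way get_ip_address does above
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1      # prints 192.168.100.8
  # Make the kernel NVMe/RDMA initiator available for the later "nvme discover"/"nvme connect" steps
  modprobe nvme-rdma
  # Launch the NVMe-oF target with the core mask and trace flags used by the test,
  # then wait for its RPC socket (/var/tmp/spdk.sock) before issuing RPCs
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &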
00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.580 14:42:58 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.580 [2024-07-15 14:42:58.349109] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:08:24.580 [2024-07-15 14:42:58.349161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.580 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.580 [2024-07-15 14:42:58.406622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.580 [2024-07-15 14:42:58.485557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.580 [2024-07-15 14:42:58.485596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.580 [2024-07-15 14:42:58.485603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.580 [2024-07-15 14:42:58.485609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.580 [2024-07-15 14:42:58.485614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.580 [2024-07-15 14:42:58.485675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.580 [2024-07-15 14:42:58.485761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.580 [2024-07-15 14:42:58.485869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.580 [2024-07-15 14:42:58.485871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 [2024-07-15 14:42:59.225181] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1120cc0/0x11251b0) succeed. 00:08:25.512 [2024-07-15 14:42:59.234329] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1122300/0x1166840) succeed. 
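The provisioning that the trace below walks through is a small loop: for each of four subsystems, create a null bdev, create the subsystem, attach the bdev as a namespace, and expose an RDMA listener; a discovery listener and a referral on port 4430 round it out. Condensed into direct RPC calls, a sketch of the same sequence (assuming the standard scripts/rpc.py client that the suite's rpc_cmd wrapper drives):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_null_create Null1 102400 512                 # NULL_BDEV_SIZE / NULL_BLOCK_SIZE set above
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # ...repeated for Null2-Null4 / cnode2-cnode4, then:
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
  # The initiator then sees six discovery log records (current discovery subsystem, 4 NVMe subsystems, 1 referral):
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420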
00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 Null1 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 [2024-07-15 14:42:59.394327] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 Null2 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:25.512 14:42:59 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.512 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.513 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:25.513 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.513 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.513 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.513 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.513 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:25.513 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 Null3 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 Null4 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.771 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:08:25.771 00:08:25.771 Discovery Log Number of Records 6, Generation counter 6 00:08:25.771 =====Discovery Log Entry 0====== 00:08:25.771 trtype: rdma 00:08:25.771 adrfam: ipv4 00:08:25.771 subtype: current discovery subsystem 00:08:25.771 treq: not required 00:08:25.771 portid: 0 00:08:25.771 trsvcid: 4420 00:08:25.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:25.772 traddr: 192.168.100.8 00:08:25.772 eflags: explicit discovery connections, duplicate discovery information 00:08:25.772 rdma_prtype: not specified 00:08:25.772 rdma_qptype: connected 00:08:25.772 rdma_cms: rdma-cm 00:08:25.772 rdma_pkey: 0x0000 00:08:25.772 =====Discovery Log Entry 1====== 00:08:25.772 trtype: rdma 00:08:25.772 adrfam: ipv4 00:08:25.772 subtype: nvme subsystem 00:08:25.772 treq: not required 00:08:25.772 portid: 0 00:08:25.772 trsvcid: 4420 00:08:25.772 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:25.772 traddr: 192.168.100.8 00:08:25.772 eflags: none 00:08:25.772 rdma_prtype: not specified 00:08:25.772 rdma_qptype: connected 00:08:25.772 rdma_cms: rdma-cm 00:08:25.772 rdma_pkey: 0x0000 00:08:25.772 =====Discovery Log Entry 2====== 00:08:25.772 
trtype: rdma 00:08:25.772 adrfam: ipv4 00:08:25.772 subtype: nvme subsystem 00:08:25.772 treq: not required 00:08:25.772 portid: 0 00:08:25.772 trsvcid: 4420 00:08:25.772 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:25.772 traddr: 192.168.100.8 00:08:25.772 eflags: none 00:08:25.772 rdma_prtype: not specified 00:08:25.772 rdma_qptype: connected 00:08:25.772 rdma_cms: rdma-cm 00:08:25.772 rdma_pkey: 0x0000 00:08:25.772 =====Discovery Log Entry 3====== 00:08:25.772 trtype: rdma 00:08:25.772 adrfam: ipv4 00:08:25.772 subtype: nvme subsystem 00:08:25.772 treq: not required 00:08:25.772 portid: 0 00:08:25.772 trsvcid: 4420 00:08:25.772 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:25.772 traddr: 192.168.100.8 00:08:25.772 eflags: none 00:08:25.772 rdma_prtype: not specified 00:08:25.772 rdma_qptype: connected 00:08:25.772 rdma_cms: rdma-cm 00:08:25.772 rdma_pkey: 0x0000 00:08:25.772 =====Discovery Log Entry 4====== 00:08:25.772 trtype: rdma 00:08:25.772 adrfam: ipv4 00:08:25.772 subtype: nvme subsystem 00:08:25.772 treq: not required 00:08:25.772 portid: 0 00:08:25.772 trsvcid: 4420 00:08:25.772 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:25.772 traddr: 192.168.100.8 00:08:25.772 eflags: none 00:08:25.772 rdma_prtype: not specified 00:08:25.772 rdma_qptype: connected 00:08:25.772 rdma_cms: rdma-cm 00:08:25.772 rdma_pkey: 0x0000 00:08:25.772 =====Discovery Log Entry 5====== 00:08:25.772 trtype: rdma 00:08:25.772 adrfam: ipv4 00:08:25.772 subtype: discovery subsystem referral 00:08:25.772 treq: not required 00:08:25.772 portid: 0 00:08:25.772 trsvcid: 4430 00:08:25.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:25.772 traddr: 192.168.100.8 00:08:25.772 eflags: none 00:08:25.772 rdma_prtype: unrecognized 00:08:25.772 rdma_qptype: unrecognized 00:08:25.772 rdma_cms: unrecognized 00:08:25.772 rdma_pkey: 0x0000 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:25.772 Perform nvmf subsystem discovery via RPC 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 [ 00:08:25.772 { 00:08:25.772 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:25.772 "subtype": "Discovery", 00:08:25.772 "listen_addresses": [ 00:08:25.772 { 00:08:25.772 "trtype": "RDMA", 00:08:25.772 "adrfam": "IPv4", 00:08:25.772 "traddr": "192.168.100.8", 00:08:25.772 "trsvcid": "4420" 00:08:25.772 } 00:08:25.772 ], 00:08:25.772 "allow_any_host": true, 00:08:25.772 "hosts": [] 00:08:25.772 }, 00:08:25.772 { 00:08:25.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:25.772 "subtype": "NVMe", 00:08:25.772 "listen_addresses": [ 00:08:25.772 { 00:08:25.772 "trtype": "RDMA", 00:08:25.772 "adrfam": "IPv4", 00:08:25.772 "traddr": "192.168.100.8", 00:08:25.772 "trsvcid": "4420" 00:08:25.772 } 00:08:25.772 ], 00:08:25.772 "allow_any_host": true, 00:08:25.772 "hosts": [], 00:08:25.772 "serial_number": "SPDK00000000000001", 00:08:25.772 "model_number": "SPDK bdev Controller", 00:08:25.772 "max_namespaces": 32, 00:08:25.772 "min_cntlid": 1, 00:08:25.772 "max_cntlid": 65519, 00:08:25.772 "namespaces": [ 00:08:25.772 { 00:08:25.772 "nsid": 1, 00:08:25.772 "bdev_name": "Null1", 00:08:25.772 "name": "Null1", 00:08:25.772 "nguid": "AF814E79D8B84DE1B15E72EC7201488F", 00:08:25.772 "uuid": 
"af814e79-d8b8-4de1-b15e-72ec7201488f" 00:08:25.772 } 00:08:25.772 ] 00:08:25.772 }, 00:08:25.772 { 00:08:25.772 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:25.772 "subtype": "NVMe", 00:08:25.772 "listen_addresses": [ 00:08:25.772 { 00:08:25.772 "trtype": "RDMA", 00:08:25.772 "adrfam": "IPv4", 00:08:25.772 "traddr": "192.168.100.8", 00:08:25.772 "trsvcid": "4420" 00:08:25.772 } 00:08:25.772 ], 00:08:25.772 "allow_any_host": true, 00:08:25.772 "hosts": [], 00:08:25.772 "serial_number": "SPDK00000000000002", 00:08:25.772 "model_number": "SPDK bdev Controller", 00:08:25.772 "max_namespaces": 32, 00:08:25.772 "min_cntlid": 1, 00:08:25.772 "max_cntlid": 65519, 00:08:25.772 "namespaces": [ 00:08:25.772 { 00:08:25.772 "nsid": 1, 00:08:25.772 "bdev_name": "Null2", 00:08:25.772 "name": "Null2", 00:08:25.772 "nguid": "7FD6BF5CEBA84E2A9EBA1FF1E2053115", 00:08:25.772 "uuid": "7fd6bf5c-eba8-4e2a-9eba-1ff1e2053115" 00:08:25.772 } 00:08:25.772 ] 00:08:25.772 }, 00:08:25.772 { 00:08:25.772 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:25.772 "subtype": "NVMe", 00:08:25.772 "listen_addresses": [ 00:08:25.772 { 00:08:25.772 "trtype": "RDMA", 00:08:25.772 "adrfam": "IPv4", 00:08:25.772 "traddr": "192.168.100.8", 00:08:25.772 "trsvcid": "4420" 00:08:25.772 } 00:08:25.772 ], 00:08:25.772 "allow_any_host": true, 00:08:25.772 "hosts": [], 00:08:25.772 "serial_number": "SPDK00000000000003", 00:08:25.772 "model_number": "SPDK bdev Controller", 00:08:25.772 "max_namespaces": 32, 00:08:25.772 "min_cntlid": 1, 00:08:25.772 "max_cntlid": 65519, 00:08:25.772 "namespaces": [ 00:08:25.772 { 00:08:25.772 "nsid": 1, 00:08:25.772 "bdev_name": "Null3", 00:08:25.772 "name": "Null3", 00:08:25.772 "nguid": "6981E8C1E9F94CED8E417E9466DE865C", 00:08:25.772 "uuid": "6981e8c1-e9f9-4ced-8e41-7e9466de865c" 00:08:25.772 } 00:08:25.772 ] 00:08:25.772 }, 00:08:25.772 { 00:08:25.772 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:25.772 "subtype": "NVMe", 00:08:25.772 "listen_addresses": [ 00:08:25.772 { 00:08:25.772 "trtype": "RDMA", 00:08:25.772 "adrfam": "IPv4", 00:08:25.772 "traddr": "192.168.100.8", 00:08:25.772 "trsvcid": "4420" 00:08:25.772 } 00:08:25.772 ], 00:08:25.772 "allow_any_host": true, 00:08:25.772 "hosts": [], 00:08:25.772 "serial_number": "SPDK00000000000004", 00:08:25.772 "model_number": "SPDK bdev Controller", 00:08:25.772 "max_namespaces": 32, 00:08:25.772 "min_cntlid": 1, 00:08:25.772 "max_cntlid": 65519, 00:08:25.772 "namespaces": [ 00:08:25.772 { 00:08:25.772 "nsid": 1, 00:08:25.772 "bdev_name": "Null4", 00:08:25.772 "name": "Null4", 00:08:25.772 "nguid": "06977C69D0A043C7ADFC662DDBACFBBE", 00:08:25.772 "uuid": "06977c69-d0a0-43c7-adfc-662ddbacfbbe" 00:08:25.772 } 00:08:25.772 ] 00:08:25.772 } 00:08:25.772 ] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:25.772 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.773 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.773 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.773 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:25.773 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.773 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:26.030 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:26.031 rmmod nvme_rdma 00:08:26.031 rmmod nvme_fabrics 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2712600 ']' 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2712600 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2712600 ']' 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2712600 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2712600 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2712600' 00:08:26.031 killing process with pid 2712600 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2712600 00:08:26.031 14:42:59 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@972 -- # wait 2712600 00:08:26.289 14:43:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.289 14:43:00 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:26.289 00:08:26.289 real 0m7.249s 00:08:26.289 user 0m7.958s 00:08:26.289 sys 0m4.403s 00:08:26.289 14:43:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.289 14:43:00 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.289 ************************************ 00:08:26.289 END TEST nvmf_target_discovery 00:08:26.289 ************************************ 00:08:26.289 14:43:00 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:26.289 14:43:00 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:26.289 14:43:00 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:26.289 14:43:00 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.289 14:43:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:26.289 ************************************ 00:08:26.289 START TEST nvmf_referrals 00:08:26.289 ************************************ 00:08:26.289 14:43:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:26.547 * Looking for test storage... 00:08:26.547 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.547 14:43:00 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:26.548 14:43:00 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:31.810 14:43:05 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:31.810 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:31.810 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:31.810 Found net devices under 0000:da:00.0: mlx_0_0 00:08:31.810 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:31.811 Found net devices under 0000:da:00.1: mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:31.811 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.811 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:31.811 altname enp218s0f0np0 00:08:31.811 altname ens818f0np0 00:08:31.811 inet 192.168.100.8/24 scope global mlx_0_0 00:08:31.811 valid_lft forever preferred_lft forever 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:31.811 
14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:31.811 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.811 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:31.811 altname enp218s0f1np1 00:08:31.811 altname ens818f1np1 00:08:31.811 inet 192.168.100.9/24 scope global mlx_0_1 00:08:31.811 valid_lft forever preferred_lft forever 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:31.811 192.168.100.9' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:31.811 192.168.100.9' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:31.811 192.168.100.9' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2715924 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2715924 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2715924 ']' 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
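The address discovery traced above reduces to a short shell pipeline: each RDMA-capable netdev is queried with `ip -o -4 addr show`, the CIDR suffix is stripped, and the first and second results become the target IPs. A simplified sketch of that step is below; the real helpers are get_ip_address/get_available_rdma_ips in test/nvmf/common.sh, and the interface names are hard-coded here to the two ports the log reports (the real code discovers them via get_rdma_if_list).

  get_ip_address() {
      # "ip -o -4" prints one line per address; field 4 is "ADDR/PREFIX"
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run
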
00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:31.811 14:43:05 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:31.812 [2024-07-15 14:43:05.722206] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:08:31.812 [2024-07-15 14:43:05.722261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.070 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.070 [2024-07-15 14:43:05.781513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.070 [2024-07-15 14:43:05.860333] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.070 [2024-07-15 14:43:05.860377] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.070 [2024-07-15 14:43:05.860383] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.070 [2024-07-15 14:43:05.860389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.070 [2024-07-15 14:43:05.860393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.070 [2024-07-15 14:43:05.860463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.070 [2024-07-15 14:43:05.860567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.070 [2024-07-15 14:43:05.860625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.070 [2024-07-15 14:43:05.860626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.636 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:32.636 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:32.636 14:43:06 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.636 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:32.636 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.894 [2024-07-15 14:43:06.593607] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c9bcc0/0x1ca01b0) succeed. 00:08:32.894 [2024-07-15 14:43:06.602812] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c9d300/0x1ce1840) succeed. 
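With the RDMA transport created, the referral test drives the target entirely over the RPC socket. The rpc_cmd calls in the trace wrap scripts/rpc.py against /var/tmp/spdk.sock; the sketch below reproduces the same setup by hand (illustrative only, assuming it is run from the SPDK checkout used by this job; transport options, listener address, and referral addresses are the ones visible in the trace).

  # create the RDMA transport and a discovery listener on the first target IP
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
  # register three discovery referrals, then confirm the target reports all of them
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length   # the test expects 3
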
00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.894 [2024-07-15 14:43:06.726588] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:32.894 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.151 14:43:06 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.151 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:33.151 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:33.151 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.151 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.151 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.151 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.151 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:33.408 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:33.409 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.409 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.665 
14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:33.665 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.666 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- 
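Both views of the referral list come from the same discovery log page: the RPC side is read with nvmf_discovery_get_referrals, while the host side issues nvme discover against the 8009 listener and filters the JSON records with jq. A condensed sketch of the host-side checks as they appear in the trace ($NVME_HOSTNQN/$NVME_HOSTID stand for the generated host identity shown in the log):

  # referral addresses as seen by the host (everything except the local discovery subsystem)
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t rdma -a 192.168.100.8 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # subnqn of the referral that points at an NVMe subsystem (expected: nqn.2016-06.io.spdk:cnode1)
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t rdma -a 192.168.100.8 -s 8009 -o json |
      jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'
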
target/referrals.sh@83 -- # get_referral_ips nvme 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.923 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:34.180 rmmod nvme_rdma 00:08:34.180 rmmod nvme_fabrics 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2715924 ']' 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2715924 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2715924 ']' 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2715924 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2715924 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:34.180 14:43:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:34.180 14:43:08 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2715924' 00:08:34.180 killing process with pid 2715924 00:08:34.180 14:43:08 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2715924 00:08:34.180 14:43:08 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2715924 00:08:34.438 14:43:08 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.439 14:43:08 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
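Teardown in nvmftestfini mirrors the setup in reverse: the host-side nvme-rdma/nvme-fabrics modules are unloaded first, then the nvmf_tgt process recorded at startup is killed and reaped. Roughly, as traced above (a sketch only; the retry loop around modprobe and the iso-mode branch are omitted, and the pid is the one this run reports):

  # unload the host NVMe-oF modules (retried up to 20 times in the real helper)
  modprobe -v -r nvme-rdma       # also drops nvme_fabrics, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  # stop the target application started by nvmfappstart and wait for it to exit
  nvmfpid=2715924                # value taken from the trace; normally captured at startup
  kill "$nvmfpid"
  wait "$nvmfpid"
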
00:08:34.439 00:08:34.439 real 0m8.116s 00:08:34.439 user 0m11.739s 00:08:34.439 sys 0m4.889s 00:08:34.439 14:43:08 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:34.439 14:43:08 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:34.439 ************************************ 00:08:34.439 END TEST nvmf_referrals 00:08:34.439 ************************************ 00:08:34.439 14:43:08 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:34.439 14:43:08 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:34.439 14:43:08 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:34.439 14:43:08 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:34.439 14:43:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:34.439 ************************************ 00:08:34.439 START TEST nvmf_connect_disconnect 00:08:34.439 ************************************ 00:08:34.439 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:34.697 * Looking for test storage... 00:08:34.697 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.697 14:43:08 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.697 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.698 14:43:08 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:39.955 14:43:13 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:39.955 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:39.955 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:39.955 Found net devices under 0000:da:00.0: mlx_0_0 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:39.955 Found net devices under 0000:da:00.1: mlx_0_1 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:39.955 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:08:39.956 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:39.956 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:39.956 altname enp218s0f0np0 00:08:39.956 altname ens818f0np0 00:08:39.956 inet 192.168.100.8/24 scope global mlx_0_0 00:08:39.956 valid_lft forever preferred_lft forever 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:39.956 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:39.956 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:39.956 altname enp218s0f1np1 00:08:39.956 altname ens818f1np1 00:08:39.956 inet 192.168.100.9/24 scope global mlx_0_1 00:08:39.956 valid_lft forever preferred_lft forever 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:39.956 192.168.100.9' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:39.956 192.168.100.9' 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:39.956 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:40.213 192.168.100.9' 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:40.213 14:43:13 
nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2719544 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2719544 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2719544 ']' 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.213 14:43:13 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:40.213 [2024-07-15 14:43:13.958390] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:08:40.213 [2024-07-15 14:43:13.958456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.213 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.213 [2024-07-15 14:43:14.014006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.213 [2024-07-15 14:43:14.097289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.213 [2024-07-15 14:43:14.097325] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.213 [2024-07-15 14:43:14.097332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.213 [2024-07-15 14:43:14.097337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.213 [2024-07-15 14:43:14.097342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
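At this point the harness has launched the target application (/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and is blocking in waitforlisten until the RPC socket answers. Reproduced by hand, that step looks roughly like the sketch below; the binary path, core mask and /var/tmp/spdk.sock are taken from the trace, the scripts/rpc.py location is the standard SPDK layout, and the polling loop is an assumption standing in for the harness's waitforlisten helper, not its actual code.

# start the NVMe-oF target with tracepoints enabled on a 4-core mask (values as seen in the trace)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!

# poll the default RPC socket until the application is ready to accept RPCs
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"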
00:08:40.213 [2024-07-15 14:43:14.097385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.213 [2024-07-15 14:43:14.097482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.213 [2024-07-15 14:43:14.097571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.213 [2024-07-15 14:43:14.097572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.141 [2024-07-15 14:43:14.819468] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:41.141 [2024-07-15 14:43:14.839377] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb58cc0/0xb5d1b0) succeed. 00:08:41.141 [2024-07-15 14:43:14.848438] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb5a300/0xb9e840) succeed. 
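The rpc_cmd calls that follow in the trace build the test fixture over the RDMA transport: a transport, a malloc bdev (bdev_malloc_create 64 512), subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 192.168.100.8:4420, after which connect_disconnect.sh runs its five connect/disconnect iterations. Condensed into a standalone sketch, with parameters copied from the trace; the host-side loop is an approximation of the script's behaviour, not its exact code.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# target-side setup, mirroring the rpc_cmd calls in the trace
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
$rpc_py bdev_malloc_create 64 512        # returns the bdev name, e.g. Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# host-side loop approximating the five iterations logged below
for i in $(seq 1 5); do
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done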
00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:41.141 [2024-07-15 14:43:14.987767] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:41.141 14:43:14 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:45.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:01.283 rmmod nvme_rdma 00:09:01.283 rmmod nvme_fabrics 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2719544 ']' 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2719544 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2719544 ']' 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2719544 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2719544 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2719544' 00:09:01.283 killing process with pid 2719544 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2719544 00:09:01.283 14:43:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2719544 00:09:01.283 14:43:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:01.283 14:43:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:01.283 00:09:01.283 real 0m26.732s 00:09:01.283 user 1m24.922s 00:09:01.283 sys 0m4.993s 00:09:01.283 14:43:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.283 14:43:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:01.283 ************************************ 00:09:01.283 END TEST nvmf_connect_disconnect 00:09:01.283 ************************************ 00:09:01.283 14:43:35 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:01.283 14:43:35 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:01.283 14:43:35 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.283 14:43:35 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.283 14:43:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:01.283 ************************************ 00:09:01.283 START TEST nvmf_multitarget 00:09:01.283 ************************************ 00:09:01.283 14:43:35 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:01.540 * Looking for test storage... 00:09:01.540 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.540 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.541 14:43:35 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:06.806 14:43:40 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:06.806 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:06.806 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:06.806 Found net devices under 0000:da:00.0: mlx_0_0 00:09:06.806 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:06.807 Found net devices under 0000:da:00.1: mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:06.807 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.807 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:06.807 altname enp218s0f0np0 00:09:06.807 altname ens818f0np0 00:09:06.807 inet 192.168.100.8/24 scope global mlx_0_0 00:09:06.807 valid_lft forever preferred_lft forever 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:06.807 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.807 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:06.807 altname enp218s0f1np1 00:09:06.807 altname ens818f1np1 00:09:06.807 inet 192.168.100.9/24 scope global mlx_0_1 00:09:06.807 valid_lft forever preferred_lft forever 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:06.807 192.168.100.9' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:06.807 192.168.100.9' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:06.807 192.168.100.9' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2726183 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2726183 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2726183 ']' 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.807 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.808 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.808 14:43:40 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:06.808 [2024-07-15 14:43:40.636198] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:09:06.808 [2024-07-15 14:43:40.636245] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.808 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.808 [2024-07-15 14:43:40.691299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.065 [2024-07-15 14:43:40.766064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.065 [2024-07-15 14:43:40.766105] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.065 [2024-07-15 14:43:40.766112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.065 [2024-07-15 14:43:40.766118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
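The test body that follows drives SPDK's multi-target RPCs through test/nvmf/target/multitarget_rpc.py: it checks that exactly one target exists, creates nvmf_tgt_1 and nvmf_tgt_2 (the -s 32 argument mirrors the trace), verifies the count went to three, then deletes both and confirms the count is back to one. Pulled out of the harness, the same sequence looks roughly like the sketch below; paths and arguments are the ones visible in the trace, and the bracketed checks paraphrase the script's '!=' tests.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

# one target (the default) should exist before the test starts
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]

# create two additional targets, as in the trace
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]

# tear them down again and confirm only the default target remains
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]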
00:09:07.065 [2024-07-15 14:43:40.766122] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.065 [2024-07-15 14:43:40.766193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.065 [2024-07-15 14:43:40.766309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.065 [2024-07-15 14:43:40.766399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.065 [2024-07-15 14:43:40.766400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.628 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:07.885 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:07.885 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:07.885 "nvmf_tgt_1" 00:09:07.885 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:07.885 "nvmf_tgt_2" 00:09:07.885 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.885 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:08.142 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:08.142 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:08.142 true 00:09:08.142 14:43:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:08.400 true 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- 
target/multitarget.sh@41 -- # nvmftestfini 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:08.400 rmmod nvme_rdma 00:09:08.400 rmmod nvme_fabrics 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2726183 ']' 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2726183 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2726183 ']' 00:09:08.400 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2726183 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2726183 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2726183' 00:09:08.401 killing process with pid 2726183 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2726183 00:09:08.401 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2726183 00:09:08.659 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.659 14:43:42 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:08.659 00:09:08.659 real 0m7.340s 00:09:08.659 user 0m9.043s 00:09:08.659 sys 0m4.490s 00:09:08.659 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.659 14:43:42 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:08.659 ************************************ 00:09:08.659 END TEST nvmf_multitarget 00:09:08.659 ************************************ 00:09:08.659 14:43:42 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:08.659 14:43:42 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:08.659 14:43:42 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.659 14:43:42 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.659 14:43:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:08.659 ************************************ 00:09:08.659 START TEST nvmf_rpc 00:09:08.659 
************************************ 00:09:08.659 14:43:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:08.918 * Looking for test storage... 00:09:08.918 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.918 14:43:42 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.918 14:43:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
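The for-loop entry above begins the walk over the cached Mellanox/Intel PCI IDs; the "Found 0000:da:00.x" and "Found net devices under ..." lines that follow are the two mlx5 ports being matched and mapped to mlx_0_0/mlx_0_1 through /sys/bus/pci/devices/<bdf>/net/. A rough way to reproduce that mapping by hand, outside the harness, is sketched here (the lspci invocation and the wrapper loop are mine, not taken from the test scripts; only the sysfs path mirrors the trace):

  # List Mellanox (vendor 0x15b3) ports and the kernel net devices exposed for them.
  for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$netdev" ] && echo "$pci -> $(basename "$netdev")"
    done
  done
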
00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:14.185 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:14.185 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:14.185 Found net devices under 0000:da:00.0: mlx_0_0 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.185 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:14.186 Found net devices under 0000:da:00.1: mlx_0_1 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.186 14:43:48 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:14.186 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.445 
14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:14.445 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.445 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:14.445 altname enp218s0f0np0 00:09:14.445 altname ens818f0np0 00:09:14.445 inet 192.168.100.8/24 scope global mlx_0_0 00:09:14.445 valid_lft forever preferred_lft forever 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:14.445 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:14.445 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:14.445 altname enp218s0f1np1 00:09:14.445 altname ens818f1np1 00:09:14.445 inet 192.168.100.9/24 scope global mlx_0_1 00:09:14.445 valid_lft forever preferred_lft forever 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:14.445 192.168.100.9' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:14.445 192.168.100.9' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:14.445 192.168.100.9' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2729718 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2729718 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2729718 ']' 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.445 14:43:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 [2024-07-15 14:43:48.340172] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:09:14.445 [2024-07-15 14:43:48.340239] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.445 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.704 [2024-07-15 14:43:48.396205] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.704 [2024-07-15 14:43:48.479309] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.704 [2024-07-15 14:43:48.479345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.704 [2024-07-15 14:43:48.479352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.704 [2024-07-15 14:43:48.479359] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.704 [2024-07-15 14:43:48.479364] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
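The app.c notices above are nvmf_tgt (pid 2729718) coming up after nvmfappstart launched it with -i 0 -e 0xFFFF -m 0xF; the reactor.c lines that follow are its reactors starting on cores 0-3, at which point waitforlisten's polling of /var/tmp/spdk.sock succeeds. A standalone approximation of that start-and-wait step is sketched below (a sketch only, not the harness code; rpc_get_methods is used here simply as a cheap RPC to probe the socket with):

  # Start the target and poll its RPC socket until it answers.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  tgt_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done
  echo "nvmf_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"
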
00:09:14.704 [2024-07-15 14:43:48.479413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.704 [2024-07-15 14:43:48.479509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.704 [2024-07-15 14:43:48.479616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.704 [2024-07-15 14:43:48.479617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.269 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.269 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:15.269 14:43:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.269 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.269 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:15.526 "tick_rate": 2100000000, 00:09:15.526 "poll_groups": [ 00:09:15.526 { 00:09:15.526 "name": "nvmf_tgt_poll_group_000", 00:09:15.526 "admin_qpairs": 0, 00:09:15.526 "io_qpairs": 0, 00:09:15.526 "current_admin_qpairs": 0, 00:09:15.526 "current_io_qpairs": 0, 00:09:15.526 "pending_bdev_io": 0, 00:09:15.526 "completed_nvme_io": 0, 00:09:15.526 "transports": [] 00:09:15.526 }, 00:09:15.526 { 00:09:15.526 "name": "nvmf_tgt_poll_group_001", 00:09:15.526 "admin_qpairs": 0, 00:09:15.526 "io_qpairs": 0, 00:09:15.526 "current_admin_qpairs": 0, 00:09:15.526 "current_io_qpairs": 0, 00:09:15.526 "pending_bdev_io": 0, 00:09:15.526 "completed_nvme_io": 0, 00:09:15.526 "transports": [] 00:09:15.526 }, 00:09:15.526 { 00:09:15.526 "name": "nvmf_tgt_poll_group_002", 00:09:15.526 "admin_qpairs": 0, 00:09:15.526 "io_qpairs": 0, 00:09:15.526 "current_admin_qpairs": 0, 00:09:15.526 "current_io_qpairs": 0, 00:09:15.526 "pending_bdev_io": 0, 00:09:15.526 "completed_nvme_io": 0, 00:09:15.526 "transports": [] 00:09:15.526 }, 00:09:15.526 { 00:09:15.526 "name": "nvmf_tgt_poll_group_003", 00:09:15.526 "admin_qpairs": 0, 00:09:15.526 "io_qpairs": 0, 00:09:15.526 "current_admin_qpairs": 0, 00:09:15.526 "current_io_qpairs": 0, 00:09:15.526 "pending_bdev_io": 0, 00:09:15.526 "completed_nvme_io": 0, 00:09:15.526 "transports": [] 00:09:15.526 } 00:09:15.526 ] 00:09:15.526 }' 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.526 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.526 [2024-07-15 14:43:49.337437] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x871cd0/0x8761c0) succeed. 00:09:15.526 [2024-07-15 14:43:49.346548] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x873310/0x8b7850) succeed. 00:09:15.784 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.784 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:15.784 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.784 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.784 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.784 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:15.784 "tick_rate": 2100000000, 00:09:15.784 "poll_groups": [ 00:09:15.784 { 00:09:15.784 "name": "nvmf_tgt_poll_group_000", 00:09:15.784 "admin_qpairs": 0, 00:09:15.784 "io_qpairs": 0, 00:09:15.784 "current_admin_qpairs": 0, 00:09:15.784 "current_io_qpairs": 0, 00:09:15.784 "pending_bdev_io": 0, 00:09:15.784 "completed_nvme_io": 0, 00:09:15.784 "transports": [ 00:09:15.784 { 00:09:15.784 "trtype": "RDMA", 00:09:15.784 "pending_data_buffer": 0, 00:09:15.784 "devices": [ 00:09:15.784 { 00:09:15.784 "name": "mlx5_0", 00:09:15.784 "polls": 14473, 00:09:15.784 "idle_polls": 14473, 00:09:15.784 "completions": 0, 00:09:15.784 "requests": 0, 00:09:15.784 "request_latency": 0, 00:09:15.784 "pending_free_request": 0, 00:09:15.784 "pending_rdma_read": 0, 00:09:15.784 "pending_rdma_write": 0, 00:09:15.784 "pending_rdma_send": 0, 00:09:15.784 "total_send_wrs": 0, 00:09:15.784 "send_doorbell_updates": 0, 00:09:15.784 "total_recv_wrs": 4096, 00:09:15.784 "recv_doorbell_updates": 1 00:09:15.784 }, 00:09:15.784 { 00:09:15.784 "name": "mlx5_1", 00:09:15.784 "polls": 14473, 00:09:15.784 "idle_polls": 14473, 00:09:15.784 "completions": 0, 00:09:15.784 "requests": 0, 00:09:15.784 "request_latency": 0, 00:09:15.784 "pending_free_request": 0, 00:09:15.784 "pending_rdma_read": 0, 00:09:15.784 "pending_rdma_write": 0, 00:09:15.784 "pending_rdma_send": 0, 00:09:15.784 "total_send_wrs": 0, 00:09:15.784 "send_doorbell_updates": 0, 00:09:15.784 "total_recv_wrs": 4096, 00:09:15.784 "recv_doorbell_updates": 1 00:09:15.784 } 00:09:15.784 ] 00:09:15.784 } 00:09:15.784 ] 00:09:15.784 }, 00:09:15.784 { 00:09:15.784 "name": "nvmf_tgt_poll_group_001", 00:09:15.784 "admin_qpairs": 0, 00:09:15.784 "io_qpairs": 0, 00:09:15.784 "current_admin_qpairs": 0, 00:09:15.784 "current_io_qpairs": 0, 00:09:15.784 "pending_bdev_io": 0, 00:09:15.784 "completed_nvme_io": 0, 00:09:15.784 "transports": [ 00:09:15.784 { 00:09:15.784 "trtype": "RDMA", 00:09:15.784 "pending_data_buffer": 0, 00:09:15.784 "devices": [ 00:09:15.784 { 00:09:15.784 "name": "mlx5_0", 00:09:15.784 "polls": 9528, 00:09:15.784 "idle_polls": 9528, 00:09:15.784 "completions": 0, 00:09:15.784 "requests": 0, 00:09:15.784 "request_latency": 0, 00:09:15.784 "pending_free_request": 0, 00:09:15.784 "pending_rdma_read": 0, 00:09:15.784 "pending_rdma_write": 0, 00:09:15.784 "pending_rdma_send": 0, 00:09:15.784 "total_send_wrs": 0, 00:09:15.784 "send_doorbell_updates": 0, 00:09:15.784 "total_recv_wrs": 4096, 00:09:15.785 "recv_doorbell_updates": 1 00:09:15.785 }, 00:09:15.785 { 
00:09:15.785 "name": "mlx5_1", 00:09:15.785 "polls": 9528, 00:09:15.785 "idle_polls": 9528, 00:09:15.785 "completions": 0, 00:09:15.785 "requests": 0, 00:09:15.785 "request_latency": 0, 00:09:15.785 "pending_free_request": 0, 00:09:15.785 "pending_rdma_read": 0, 00:09:15.785 "pending_rdma_write": 0, 00:09:15.785 "pending_rdma_send": 0, 00:09:15.785 "total_send_wrs": 0, 00:09:15.785 "send_doorbell_updates": 0, 00:09:15.785 "total_recv_wrs": 4096, 00:09:15.785 "recv_doorbell_updates": 1 00:09:15.785 } 00:09:15.785 ] 00:09:15.785 } 00:09:15.785 ] 00:09:15.785 }, 00:09:15.785 { 00:09:15.785 "name": "nvmf_tgt_poll_group_002", 00:09:15.785 "admin_qpairs": 0, 00:09:15.785 "io_qpairs": 0, 00:09:15.785 "current_admin_qpairs": 0, 00:09:15.785 "current_io_qpairs": 0, 00:09:15.785 "pending_bdev_io": 0, 00:09:15.785 "completed_nvme_io": 0, 00:09:15.785 "transports": [ 00:09:15.785 { 00:09:15.785 "trtype": "RDMA", 00:09:15.785 "pending_data_buffer": 0, 00:09:15.785 "devices": [ 00:09:15.785 { 00:09:15.785 "name": "mlx5_0", 00:09:15.785 "polls": 5143, 00:09:15.785 "idle_polls": 5143, 00:09:15.785 "completions": 0, 00:09:15.785 "requests": 0, 00:09:15.785 "request_latency": 0, 00:09:15.785 "pending_free_request": 0, 00:09:15.785 "pending_rdma_read": 0, 00:09:15.785 "pending_rdma_write": 0, 00:09:15.785 "pending_rdma_send": 0, 00:09:15.785 "total_send_wrs": 0, 00:09:15.785 "send_doorbell_updates": 0, 00:09:15.785 "total_recv_wrs": 4096, 00:09:15.785 "recv_doorbell_updates": 1 00:09:15.785 }, 00:09:15.785 { 00:09:15.785 "name": "mlx5_1", 00:09:15.785 "polls": 5143, 00:09:15.785 "idle_polls": 5143, 00:09:15.785 "completions": 0, 00:09:15.785 "requests": 0, 00:09:15.785 "request_latency": 0, 00:09:15.785 "pending_free_request": 0, 00:09:15.785 "pending_rdma_read": 0, 00:09:15.785 "pending_rdma_write": 0, 00:09:15.785 "pending_rdma_send": 0, 00:09:15.785 "total_send_wrs": 0, 00:09:15.785 "send_doorbell_updates": 0, 00:09:15.785 "total_recv_wrs": 4096, 00:09:15.785 "recv_doorbell_updates": 1 00:09:15.785 } 00:09:15.785 ] 00:09:15.785 } 00:09:15.785 ] 00:09:15.785 }, 00:09:15.785 { 00:09:15.785 "name": "nvmf_tgt_poll_group_003", 00:09:15.785 "admin_qpairs": 0, 00:09:15.785 "io_qpairs": 0, 00:09:15.785 "current_admin_qpairs": 0, 00:09:15.785 "current_io_qpairs": 0, 00:09:15.785 "pending_bdev_io": 0, 00:09:15.785 "completed_nvme_io": 0, 00:09:15.785 "transports": [ 00:09:15.785 { 00:09:15.785 "trtype": "RDMA", 00:09:15.785 "pending_data_buffer": 0, 00:09:15.785 "devices": [ 00:09:15.785 { 00:09:15.785 "name": "mlx5_0", 00:09:15.785 "polls": 857, 00:09:15.785 "idle_polls": 857, 00:09:15.785 "completions": 0, 00:09:15.785 "requests": 0, 00:09:15.785 "request_latency": 0, 00:09:15.785 "pending_free_request": 0, 00:09:15.785 "pending_rdma_read": 0, 00:09:15.785 "pending_rdma_write": 0, 00:09:15.785 "pending_rdma_send": 0, 00:09:15.785 "total_send_wrs": 0, 00:09:15.785 "send_doorbell_updates": 0, 00:09:15.785 "total_recv_wrs": 4096, 00:09:15.785 "recv_doorbell_updates": 1 00:09:15.785 }, 00:09:15.785 { 00:09:15.785 "name": "mlx5_1", 00:09:15.785 "polls": 857, 00:09:15.785 "idle_polls": 857, 00:09:15.785 "completions": 0, 00:09:15.785 "requests": 0, 00:09:15.785 "request_latency": 0, 00:09:15.785 "pending_free_request": 0, 00:09:15.785 "pending_rdma_read": 0, 00:09:15.785 "pending_rdma_write": 0, 00:09:15.785 "pending_rdma_send": 0, 00:09:15.785 "total_send_wrs": 0, 00:09:15.785 "send_doorbell_updates": 0, 00:09:15.785 "total_recv_wrs": 4096, 00:09:15.785 "recv_doorbell_updates": 1 00:09:15.785 } 00:09:15.785 ] 
00:09:15.785 } 00:09:15.785 ] 00:09:15.785 } 00:09:15.785 ] 00:09:15.785 }' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:09:15.785 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.043 Malloc1 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:16.043 14:43:49 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.043 [2024-07-15 14:43:49.772959] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:16.043 [2024-07-15 14:43:49.818849] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:09:16.043 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:16.043 could not add new controller: failed to write to 
nvme-fabrics device 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.043 14:43:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:16.972 14:43:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.972 14:43:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.972 14:43:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.972 14:43:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.972 14:43:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:19.498 14:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:19.498 14:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:19.498 14:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.498 14:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:19.498 14:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.498 14:43:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:19.498 14:43:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.066 14:43:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.066 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:20.066 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:20.066 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:20.067 [2024-07-15 14:43:53.860689] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:09:20.067 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:20.067 could not add new controller: failed to write to nvme-fabrics device 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.067 14:43:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:20.996 14:43:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.996 14:43:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:20.996 14:43:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.996 14:43:54 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:20.996 14:43:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:23.516 14:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:23.516 14:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:23.516 14:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:23.516 14:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:23.516 14:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:23.516 14:43:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:23.516 14:43:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:24.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.077 [2024-07-15 14:43:57.879089] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
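From here the trace repeats the rpc.sh@81 loop: each of the five iterations creates a fresh nqn.2016-06.io.spdk:cnode1 subsystem, adds the RDMA listener on 192.168.100.8:4420 and the Malloc1 namespace, opens it to any host, connects to and disconnects from it with nvme-cli, then removes the namespace and deletes the subsystem. Condensed into one standalone pass (a sketch; the individual rpc.py and nvme-cli calls are the ones traced in this log, only the surrounding shell is mine):

  rpc=./scripts/rpc.py
  # Build the subsystem exactly as one loop iteration does.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  # Connect from the initiator side, then tear everything down again.
  # (The harness additionally passes --hostnqn/--hostid for the host-allowlist cases.)
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
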
00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.077 14:43:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:24.078 14:43:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:25.008 14:43:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.008 14:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:25.008 14:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.008 14:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:25.008 14:43:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:27.529 14:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:27.529 14:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:27.529 14:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.529 14:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:27.529 14:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.529 14:44:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:27.529 14:44:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.094 [2024-07-15 14:44:01.889376] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.094 14:44:01 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:29.026 14:44:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.026 14:44:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:29.026 14:44:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.026 14:44:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:29.026 14:44:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:31.548 14:44:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:31.548 14:44:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:31.548 14:44:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.548 14:44:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:31.548 14:44:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.548 14:44:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:31.548 14:44:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 [2024-07-15 14:44:05.914459] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:32.112 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:32.113 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:32.113 14:44:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:32.113 14:44:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:33.043 14:44:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.043 14:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:33.043 14:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.043 14:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.043 14:44:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.569 14:44:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.569 14:44:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.569 14:44:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.569 14:44:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.569 14:44:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.569 14:44:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:35.569 14:44:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 
-t rdma -a 192.168.100.8 -s 4420 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.137 [2024-07-15 14:44:09.941570] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.137 14:44:09 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:37.074 14:44:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.074 14:44:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.074 14:44:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.074 14:44:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:37.074 14:44:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:39.605 14:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:39.605 14:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:39.605 14:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.605 14:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:39.605 14:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.605 14:44:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:39.605 14:44:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
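The waitforserial / waitforserial_disconnect calls traced above poll lsblk for the subsystem serial (SPDKISFASTANDAWESOME) until the namespace appears on, or disappears from, the host. A condensed sketch of that polling logic, paraphrased from the xtrace rather than copied from autotest_common.sh (the retry limit of 15 and the 2-second sleep are taken from the trace; the exact loop shape in the helper may differ):

    # Poll until one block device with the given serial is visible (sketch of waitforserial).
    wait_for_serial() {
        local serial=$1 i=0 want=1 found=0
        while (( i++ <= 15 )); do
            sleep 2
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == want )) && return 0
        done
        return 1
    }

    # Poll until no block device with the serial remains (sketch of waitforserial_disconnect).
    wait_for_serial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }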
00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.169 [2024-07-15 14:44:13.958190] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:40.169 14:44:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:41.102 14:44:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.102 14:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:41.102 14:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.102 14:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:41.102 14:44:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:43.622 14:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 
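Each pass of the for i in $(seq 1 $loops) loop in target/rpc.sh exercises the same create/connect/disconnect/teardown cycle seen above. A linearized sketch of one iteration, using only commands visible in the trace (rpc_cmd is the framework's wrapper around scripts/rpc.py, and NVME_HOSTNQN / NVME_HOSTID come from nvmf/common.sh, as shown later in this log):

    rpc=scripts/rpc.py                     # what rpc_cmd ultimately invokes
    nqn=nqn.2016-06.io.spdk:cnode1
    addr=192.168.100.8

    $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a $addr -s 4420
    $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5          # namespace ID 5
    $rpc nvmf_subsystem_allow_any_host $nqn

    nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t rdma -n $nqn -a $addr -s 4420
    wait_for_serial SPDKISFASTANDAWESOME                  # helper sketched earlier
    nvme disconnect -n $nqn
    wait_for_serial_disconnect SPDKISFASTANDAWESOME

    $rpc nvmf_subsystem_remove_ns $nqn 5
    $rpc nvmf_delete_subsystem $nqn

The second loop that follows (target/rpc.sh lines 99–107) runs the same subsystem setup and teardown but skips the host-side connect/disconnect step.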
00:09:43.622 14:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:43.622 14:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.622 14:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:43.622 14:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.622 14:44:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:43.622 14:44:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 [2024-07-15 14:44:17.964529] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 [2024-07-15 14:44:18.012696] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 
14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 [2024-07-15 14:44:18.064892] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.189 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 [2024-07-15 14:44:18.113096] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 [2024-07-15 14:44:18.161218] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.454 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:44.454 "tick_rate": 2100000000, 00:09:44.454 "poll_groups": [ 00:09:44.454 { 00:09:44.454 "name": "nvmf_tgt_poll_group_000", 00:09:44.454 "admin_qpairs": 2, 00:09:44.454 "io_qpairs": 27, 00:09:44.454 "current_admin_qpairs": 0, 00:09:44.454 "current_io_qpairs": 0, 00:09:44.454 "pending_bdev_io": 0, 00:09:44.454 "completed_nvme_io": 126, 00:09:44.454 "transports": [ 00:09:44.454 { 00:09:44.454 "trtype": "RDMA", 00:09:44.454 "pending_data_buffer": 0, 00:09:44.454 "devices": [ 00:09:44.454 { 00:09:44.454 "name": "mlx5_0", 00:09:44.454 "polls": 3405426, 00:09:44.454 "idle_polls": 3405104, 00:09:44.454 "completions": 361, 00:09:44.454 "requests": 180, 00:09:44.454 "request_latency": 32460982, 00:09:44.454 "pending_free_request": 0, 00:09:44.454 "pending_rdma_read": 0, 00:09:44.454 "pending_rdma_write": 0, 00:09:44.454 "pending_rdma_send": 0, 00:09:44.454 "total_send_wrs": 305, 00:09:44.454 "send_doorbell_updates": 156, 00:09:44.454 "total_recv_wrs": 4276, 00:09:44.454 "recv_doorbell_updates": 156 00:09:44.454 }, 00:09:44.454 { 00:09:44.454 "name": "mlx5_1", 00:09:44.454 "polls": 3405426, 00:09:44.454 "idle_polls": 3405426, 00:09:44.454 "completions": 0, 00:09:44.454 "requests": 0, 00:09:44.454 "request_latency": 0, 00:09:44.454 "pending_free_request": 0, 00:09:44.454 "pending_rdma_read": 0, 00:09:44.454 "pending_rdma_write": 0, 00:09:44.454 "pending_rdma_send": 0, 00:09:44.454 "total_send_wrs": 0, 00:09:44.454 "send_doorbell_updates": 0, 00:09:44.454 "total_recv_wrs": 4096, 00:09:44.454 "recv_doorbell_updates": 1 00:09:44.454 } 
00:09:44.454 ] 00:09:44.454 } 00:09:44.454 ] 00:09:44.454 }, 00:09:44.454 { 00:09:44.454 "name": "nvmf_tgt_poll_group_001", 00:09:44.454 "admin_qpairs": 2, 00:09:44.454 "io_qpairs": 26, 00:09:44.454 "current_admin_qpairs": 0, 00:09:44.454 "current_io_qpairs": 0, 00:09:44.454 "pending_bdev_io": 0, 00:09:44.454 "completed_nvme_io": 126, 00:09:44.454 "transports": [ 00:09:44.454 { 00:09:44.454 "trtype": "RDMA", 00:09:44.454 "pending_data_buffer": 0, 00:09:44.454 "devices": [ 00:09:44.454 { 00:09:44.454 "name": "mlx5_0", 00:09:44.454 "polls": 3423696, 00:09:44.454 "idle_polls": 3423376, 00:09:44.454 "completions": 360, 00:09:44.454 "requests": 180, 00:09:44.454 "request_latency": 31427094, 00:09:44.454 "pending_free_request": 0, 00:09:44.454 "pending_rdma_read": 0, 00:09:44.454 "pending_rdma_write": 0, 00:09:44.454 "pending_rdma_send": 0, 00:09:44.454 "total_send_wrs": 306, 00:09:44.454 "send_doorbell_updates": 156, 00:09:44.454 "total_recv_wrs": 4276, 00:09:44.454 "recv_doorbell_updates": 157 00:09:44.454 }, 00:09:44.454 { 00:09:44.454 "name": "mlx5_1", 00:09:44.454 "polls": 3423696, 00:09:44.454 "idle_polls": 3423696, 00:09:44.454 "completions": 0, 00:09:44.454 "requests": 0, 00:09:44.454 "request_latency": 0, 00:09:44.454 "pending_free_request": 0, 00:09:44.454 "pending_rdma_read": 0, 00:09:44.454 "pending_rdma_write": 0, 00:09:44.454 "pending_rdma_send": 0, 00:09:44.454 "total_send_wrs": 0, 00:09:44.454 "send_doorbell_updates": 0, 00:09:44.454 "total_recv_wrs": 4096, 00:09:44.454 "recv_doorbell_updates": 1 00:09:44.454 } 00:09:44.454 ] 00:09:44.454 } 00:09:44.454 ] 00:09:44.454 }, 00:09:44.454 { 00:09:44.454 "name": "nvmf_tgt_poll_group_002", 00:09:44.454 "admin_qpairs": 1, 00:09:44.454 "io_qpairs": 26, 00:09:44.454 "current_admin_qpairs": 0, 00:09:44.454 "current_io_qpairs": 0, 00:09:44.454 "pending_bdev_io": 0, 00:09:44.454 "completed_nvme_io": 78, 00:09:44.454 "transports": [ 00:09:44.454 { 00:09:44.454 "trtype": "RDMA", 00:09:44.454 "pending_data_buffer": 0, 00:09:44.454 "devices": [ 00:09:44.454 { 00:09:44.454 "name": "mlx5_0", 00:09:44.454 "polls": 3370522, 00:09:44.454 "idle_polls": 3370328, 00:09:44.454 "completions": 213, 00:09:44.454 "requests": 106, 00:09:44.454 "request_latency": 17679146, 00:09:44.454 "pending_free_request": 0, 00:09:44.454 "pending_rdma_read": 0, 00:09:44.454 "pending_rdma_write": 0, 00:09:44.454 "pending_rdma_send": 0, 00:09:44.454 "total_send_wrs": 172, 00:09:44.454 "send_doorbell_updates": 95, 00:09:44.455 "total_recv_wrs": 4202, 00:09:44.455 "recv_doorbell_updates": 95 00:09:44.455 }, 00:09:44.455 { 00:09:44.455 "name": "mlx5_1", 00:09:44.455 "polls": 3370522, 00:09:44.455 "idle_polls": 3370522, 00:09:44.455 "completions": 0, 00:09:44.455 "requests": 0, 00:09:44.455 "request_latency": 0, 00:09:44.455 "pending_free_request": 0, 00:09:44.455 "pending_rdma_read": 0, 00:09:44.455 "pending_rdma_write": 0, 00:09:44.455 "pending_rdma_send": 0, 00:09:44.455 "total_send_wrs": 0, 00:09:44.455 "send_doorbell_updates": 0, 00:09:44.455 "total_recv_wrs": 4096, 00:09:44.455 "recv_doorbell_updates": 1 00:09:44.455 } 00:09:44.455 ] 00:09:44.455 } 00:09:44.455 ] 00:09:44.455 }, 00:09:44.455 { 00:09:44.455 "name": "nvmf_tgt_poll_group_003", 00:09:44.455 "admin_qpairs": 2, 00:09:44.455 "io_qpairs": 26, 00:09:44.455 "current_admin_qpairs": 0, 00:09:44.455 "current_io_qpairs": 0, 00:09:44.455 "pending_bdev_io": 0, 00:09:44.455 "completed_nvme_io": 125, 00:09:44.455 "transports": [ 00:09:44.455 { 00:09:44.455 "trtype": "RDMA", 00:09:44.455 "pending_data_buffer": 0, 00:09:44.455 
"devices": [ 00:09:44.455 { 00:09:44.455 "name": "mlx5_0", 00:09:44.455 "polls": 2691908, 00:09:44.455 "idle_polls": 2691601, 00:09:44.455 "completions": 358, 00:09:44.455 "requests": 179, 00:09:44.455 "request_latency": 32830974, 00:09:44.455 "pending_free_request": 0, 00:09:44.455 "pending_rdma_read": 0, 00:09:44.455 "pending_rdma_write": 0, 00:09:44.455 "pending_rdma_send": 0, 00:09:44.455 "total_send_wrs": 304, 00:09:44.455 "send_doorbell_updates": 153, 00:09:44.455 "total_recv_wrs": 4275, 00:09:44.455 "recv_doorbell_updates": 154 00:09:44.455 }, 00:09:44.455 { 00:09:44.455 "name": "mlx5_1", 00:09:44.455 "polls": 2691908, 00:09:44.455 "idle_polls": 2691908, 00:09:44.455 "completions": 0, 00:09:44.455 "requests": 0, 00:09:44.455 "request_latency": 0, 00:09:44.455 "pending_free_request": 0, 00:09:44.455 "pending_rdma_read": 0, 00:09:44.455 "pending_rdma_write": 0, 00:09:44.455 "pending_rdma_send": 0, 00:09:44.455 "total_send_wrs": 0, 00:09:44.455 "send_doorbell_updates": 0, 00:09:44.455 "total_recv_wrs": 4096, 00:09:44.455 "recv_doorbell_updates": 1 00:09:44.455 } 00:09:44.455 ] 00:09:44.455 } 00:09:44.455 ] 00:09:44.455 } 00:09:44.455 ] 00:09:44.455 }' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1292 > 0 )) 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:09:44.455 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 114398196 > 0 )) 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:44.713 rmmod nvme_rdma 00:09:44.713 rmmod nvme_fabrics 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2729718 ']' 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2729718 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2729718 ']' 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2729718 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2729718 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2729718' 00:09:44.713 killing process with pid 2729718 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2729718 00:09:44.713 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2729718 00:09:44.971 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.971 14:44:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:44.971 00:09:44.971 real 0m36.251s 00:09:44.971 user 2m2.651s 00:09:44.971 sys 0m5.711s 00:09:44.971 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.971 14:44:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:44.971 ************************************ 00:09:44.971 END TEST nvmf_rpc 00:09:44.971 ************************************ 00:09:44.971 14:44:18 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:44.971 14:44:18 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:44.971 14:44:18 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:44.971 14:44:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.971 14:44:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:44.971 ************************************ 00:09:44.971 START TEST nvmf_invalid 00:09:44.971 ************************************ 00:09:44.971 14:44:18 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:45.230 * Looking for test storage... 
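Before the teardown above, nvmf_rpc captured rpc.py nvmf_get_stats output into $stats and summed individual fields with its jsum helper (jq piped into awk), asserting that each total was non-zero. A sketch of that aggregation as it appears in the trace; how $stats is fed into jq is not shown there, so the here-string below is an assumption:

    jsum() {
        # Sum one numeric field across all poll groups / transports / devices.
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    stats=$($rpc nvmf_get_stats)
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))                              # 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))                                 # 105
    (( $(jsum '.poll_groups[].transports[].devices[].completions') > 0 ))        # 1292 (RDMA-only check)
    (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))    # 114398196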
00:09:45.230 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.230 14:44:18 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.230 14:44:18 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.500 
14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:50.500 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:50.500 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.500 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:50.500 Found net devices under 0000:da:00.0: mlx_0_0 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:50.501 Found net devices under 0000:da:00.1: mlx_0_1 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:50.501 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
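The NIC/IP discovery running here (allocate_nic_ips and, later, get_available_rdma_ips) reduces to pulling the first IPv4 address off each mlx netdev. A condensed sketch built from the per-interface expansions traced just below; the variable assembly at the end paraphrases how NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP are derived from RDMA_IP_LIST further on in the trace:

    get_ip_address() {
        # First IPv4 address on the given RDMA netdev, without the prefix length.
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 on this host
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9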
00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:50.769 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:50.769 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:50.769 altname enp218s0f0np0 00:09:50.769 altname ens818f0np0 00:09:50.769 inet 192.168.100.8/24 scope global mlx_0_0 00:09:50.769 valid_lft forever preferred_lft forever 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:50.769 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:50.769 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:50.769 altname enp218s0f1np1 00:09:50.769 altname ens818f1np1 00:09:50.769 inet 192.168.100.9/24 scope global mlx_0_1 00:09:50.769 valid_lft forever preferred_lft forever 00:09:50.769 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:50.770 192.168.100.9' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:50.770 192.168.100.9' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:50.770 192.168.100.9' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2738047 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2738047 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2738047 ']' 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:50.770 14:44:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:50.770 [2024-07-15 14:44:24.584674] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:09:50.770 [2024-07-15 14:44:24.584728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.770 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.770 [2024-07-15 14:44:24.640183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.040 [2024-07-15 14:44:24.719039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.040 [2024-07-15 14:44:24.719077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.040 [2024-07-15 14:44:24.719084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.040 [2024-07-15 14:44:24.719090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.040 [2024-07-15 14:44:24.719095] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
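The address discovery traced above reduces to two steps: read each RDMA interface's IPv4 address, then split the resulting list into the first and second target IPs. A short sketch, assuming a simplified rewrite of the helpers rather than the exact nvmf/common.sh:

# Sketch only: simplified rewrite of the IP discovery traced above.
get_ip_address() {
  local interface=$1
  # "ip -o -4" prints one record per line; field 4 is the CIDR address, e.g. 192.168.100.8/24.
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9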
00:09:51.040 [2024-07-15 14:44:24.719145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.040 [2024-07-15 14:44:24.719245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.040 [2024-07-15 14:44:24.719310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.040 [2024-07-15 14:44:24.719311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:51.609 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29047 00:09:51.866 [2024-07-15 14:44:25.577992] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:51.866 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:51.866 { 00:09:51.866 "nqn": "nqn.2016-06.io.spdk:cnode29047", 00:09:51.866 "tgt_name": "foobar", 00:09:51.866 "method": "nvmf_create_subsystem", 00:09:51.866 "req_id": 1 00:09:51.866 } 00:09:51.866 Got JSON-RPC error response 00:09:51.866 response: 00:09:51.866 { 00:09:51.866 "code": -32603, 00:09:51.866 "message": "Unable to find target foobar" 00:09:51.866 }' 00:09:51.866 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:51.866 { 00:09:51.866 "nqn": "nqn.2016-06.io.spdk:cnode29047", 00:09:51.866 "tgt_name": "foobar", 00:09:51.866 "method": "nvmf_create_subsystem", 00:09:51.866 "req_id": 1 00:09:51.866 } 00:09:51.866 Got JSON-RPC error response 00:09:51.866 response: 00:09:51.866 { 00:09:51.866 "code": -32603, 00:09:51.866 "message": "Unable to find target foobar" 00:09:51.866 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:51.866 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:51.866 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15258 00:09:51.866 [2024-07-15 14:44:25.762655] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15258: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:52.123 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:52.123 { 00:09:52.123 "nqn": "nqn.2016-06.io.spdk:cnode15258", 00:09:52.123 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:52.124 "method": "nvmf_create_subsystem", 00:09:52.124 "req_id": 1 00:09:52.124 } 00:09:52.124 Got JSON-RPC error response 00:09:52.124 response: 00:09:52.124 { 00:09:52.124 "code": -32602, 00:09:52.124 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:52.124 }' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # 
[[ request: 00:09:52.124 { 00:09:52.124 "nqn": "nqn.2016-06.io.spdk:cnode15258", 00:09:52.124 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:52.124 "method": "nvmf_create_subsystem", 00:09:52.124 "req_id": 1 00:09:52.124 } 00:09:52.124 Got JSON-RPC error response 00:09:52.124 response: 00:09:52.124 { 00:09:52.124 "code": -32602, 00:09:52.124 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:52.124 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22804 00:09:52.124 [2024-07-15 14:44:25.947232] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22804: invalid model number 'SPDK_Controller' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:52.124 { 00:09:52.124 "nqn": "nqn.2016-06.io.spdk:cnode22804", 00:09:52.124 "model_number": "SPDK_Controller\u001f", 00:09:52.124 "method": "nvmf_create_subsystem", 00:09:52.124 "req_id": 1 00:09:52.124 } 00:09:52.124 Got JSON-RPC error response 00:09:52.124 response: 00:09:52.124 { 00:09:52.124 "code": -32602, 00:09:52.124 "message": "Invalid MN SPDK_Controller\u001f" 00:09:52.124 }' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:52.124 { 00:09:52.124 "nqn": "nqn.2016-06.io.spdk:cnode22804", 00:09:52.124 "model_number": "SPDK_Controller\u001f", 00:09:52.124 "method": "nvmf_create_subsystem", 00:09:52.124 "req_id": 1 00:09:52.124 } 00:09:52.124 Got JSON-RPC error response 00:09:52.124 response: 00:09:52.124 { 00:09:52.124 "code": -32602, 00:09:52.124 "message": "Invalid MN SPDK_Controller\u001f" 00:09:52.124 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 92 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:25 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.124 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:52.124 14:44:26 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
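The long printf/echo/string+= run traced here is gen_random_s assembling a random serial or model number one character at a time from the chars array of ASCII codes 32..127. A compact sketch of the same approach, assuming a simplified rewrite of the invalid.sh helper:

# Sketch only: simplified rewrite of gen_random_s from target/invalid.sh.
gen_random_s() {
  local length=$1 ll string=''
  local chars=($(seq 32 127))        # printable ASCII plus DEL, matching the chars=() array above
  for (( ll = 0; ll < length; ll++ )); do
    local x
    x=$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")  # pick a code point, render it as hex
    string+=$(echo -e "\x$x")                           # append the literal character
  done
  echo "$string"
}
# e.g. gen_random_s 21 yields a 21-character serial-number candidate like the one echoed further below.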
00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ L == \- ]] 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'L\m)"\${I&+v_9).M>Inp' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'L\m)"\${I&+v_9).M>Inp' nqn.2016-06.io.spdk:cnode13743 00:09:52.394 [2024-07-15 14:44:26.268306] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13743: invalid serial number 'L\m)"\${I&+v_9).M>Inp' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:52.394 { 00:09:52.394 "nqn": "nqn.2016-06.io.spdk:cnode13743", 00:09:52.394 "serial_number": "L\\m)\"\\${I&+v_9).M>Inp", 00:09:52.394 "method": "nvmf_create_subsystem", 00:09:52.394 "req_id": 1 00:09:52.394 } 00:09:52.394 Got JSON-RPC error response 00:09:52.394 response: 00:09:52.394 { 00:09:52.394 "code": -32602, 00:09:52.394 "message": "Invalid SN L\\m)\"\\${I&+v_9).M>Inp" 00:09:52.394 }' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:52.394 { 
00:09:52.394 "nqn": "nqn.2016-06.io.spdk:cnode13743", 00:09:52.394 "serial_number": "L\\m)\"\\${I&+v_9).M>Inp", 00:09:52.394 "method": "nvmf_create_subsystem", 00:09:52.394 "req_id": 1 00:09:52.394 } 00:09:52.394 Got JSON-RPC error response 00:09:52.394 response: 00:09:52.394 { 00:09:52.394 "code": -32602, 00:09:52.394 "message": "Invalid SN L\\m)\"\\${I&+v_9).M>Inp" 00:09:52.394 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.394 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 
00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.686 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 
14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
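The lengths passed to gen_random_s are not arbitrary: NVMe's Identify Controller data reserves 20 bytes for the serial number (SN) and 40 for the model number (MN), so the 21- and 41-character strings are one byte too long by design and must be rejected. An illustrative check, offered as a sketch rather than SPDK's actual validation code:

# Sketch only: illustrative length/printability check, not SPDK's implementation.
valid_sn() { local sn=$1; (( ${#sn} <= 20 )) && [[ $sn =~ ^[[:print:]]*$ ]]; }
valid_mn() { local mn=$1; (( ${#mn} <= 40 )) && [[ $mn =~ ^[[:print:]]*$ ]]; }
# e.g. valid_sn "$(gen_random_s 21)" fails, which is what the 'Invalid SN' response above asserts.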
00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]] 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'a\v1id~\E6|N)?6?sL/p3~0;*/' 00:09:52.687 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'a\v1id~\E6|N)?6?sL/p3~0;*/' nqn.2016-06.io.spdk:cnode23286 00:09:52.957 [2024-07-15 14:44:26.717806] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23286: invalid model number 'a\v1id~\E6|N)?6?sL/p3~0;*/' 00:09:52.957 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:52.957 { 00:09:52.957 "nqn": "nqn.2016-06.io.spdk:cnode23286", 00:09:52.957 "model_number": "a\\v1id~\\E6|N)?6?sL/p3~0;*/", 00:09:52.957 "method": "nvmf_create_subsystem", 00:09:52.957 "req_id": 1 00:09:52.957 } 00:09:52.957 Got JSON-RPC error response 00:09:52.957 response: 00:09:52.957 { 00:09:52.957 "code": -32602, 00:09:52.957 "message": "Invalid MN a\\v1id~\\E6|N)?6?sL/p3~0;*/" 00:09:52.957 }' 00:09:52.957 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:52.957 { 00:09:52.957 "nqn": "nqn.2016-06.io.spdk:cnode23286", 00:09:52.957 "model_number": "a\\v1id~\\E6|N)?6?sL/p3~0;*/", 00:09:52.957 "method": "nvmf_create_subsystem", 00:09:52.957 "req_id": 1 00:09:52.957 } 00:09:52.957 Got JSON-RPC error response 00:09:52.957 response: 00:09:52.957 { 00:09:52.957 "code": -32602, 00:09:52.957 "message": "Invalid MN a\\v1id~\\E6|N)?6?sL/p3~0;*/" 00:09:52.957 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:52.957 14:44:26 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 
00:09:53.235 [2024-07-15 14:44:26.919207] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b57560/0x1b5ba50) succeed. 00:09:53.235 [2024-07-15 14:44:26.928362] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b58ba0/0x1b9d0e0) succeed. 00:09:53.235 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:53.515 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:09:53.515 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:53.515 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:09:53.515 192.168.100.9' 00:09:53.515 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:09:53.515 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:09:53.515 [2024-07-15 14:44:27.409931] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:53.797 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:53.797 { 00:09:53.797 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:53.797 "listen_address": { 00:09:53.797 "trtype": "rdma", 00:09:53.797 "traddr": "192.168.100.8", 00:09:53.797 "trsvcid": "4421" 00:09:53.797 }, 00:09:53.797 "method": "nvmf_subsystem_remove_listener", 00:09:53.797 "req_id": 1 00:09:53.797 } 00:09:53.797 Got JSON-RPC error response 00:09:53.797 response: 00:09:53.797 { 00:09:53.797 "code": -32602, 00:09:53.797 "message": "Invalid parameters" 00:09:53.797 }' 00:09:53.797 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:53.797 { 00:09:53.797 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:53.797 "listen_address": { 00:09:53.797 "trtype": "rdma", 00:09:53.797 "traddr": "192.168.100.8", 00:09:53.797 "trsvcid": "4421" 00:09:53.797 }, 00:09:53.797 "method": "nvmf_subsystem_remove_listener", 00:09:53.797 "req_id": 1 00:09:53.797 } 00:09:53.797 Got JSON-RPC error response 00:09:53.797 response: 00:09:53.797 { 00:09:53.797 "code": -32602, 00:09:53.797 "message": "Invalid parameters" 00:09:53.797 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:53.797 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23397 -i 0 00:09:53.797 [2024-07-15 14:44:27.590549] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23397: invalid cntlid range [0-65519] 00:09:53.797 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:53.797 { 00:09:53.797 "nqn": "nqn.2016-06.io.spdk:cnode23397", 00:09:53.797 "min_cntlid": 0, 00:09:53.797 "method": "nvmf_create_subsystem", 00:09:53.797 "req_id": 1 00:09:53.797 } 00:09:53.797 Got JSON-RPC error response 00:09:53.797 response: 00:09:53.797 { 00:09:53.797 "code": -32602, 00:09:53.797 "message": "Invalid cntlid range [0-65519]" 00:09:53.797 }' 00:09:53.797 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:53.797 { 00:09:53.797 "nqn": "nqn.2016-06.io.spdk:cnode23397", 00:09:53.797 "min_cntlid": 0, 00:09:53.797 "method": "nvmf_create_subsystem", 00:09:53.797 "req_id": 1 00:09:53.797 } 00:09:53.797 Got JSON-RPC error response 00:09:53.797 response: 
00:09:53.797 { 00:09:53.797 "code": -32602, 00:09:53.797 "message": "Invalid cntlid range [0-65519]" 00:09:53.797 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:53.797 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27563 -i 65520 00:09:54.055 [2024-07-15 14:44:27.787305] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27563: invalid cntlid range [65520-65519] 00:09:54.055 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:54.055 { 00:09:54.055 "nqn": "nqn.2016-06.io.spdk:cnode27563", 00:09:54.055 "min_cntlid": 65520, 00:09:54.055 "method": "nvmf_create_subsystem", 00:09:54.055 "req_id": 1 00:09:54.055 } 00:09:54.055 Got JSON-RPC error response 00:09:54.055 response: 00:09:54.055 { 00:09:54.055 "code": -32602, 00:09:54.055 "message": "Invalid cntlid range [65520-65519]" 00:09:54.055 }' 00:09:54.055 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:54.055 { 00:09:54.055 "nqn": "nqn.2016-06.io.spdk:cnode27563", 00:09:54.055 "min_cntlid": 65520, 00:09:54.055 "method": "nvmf_create_subsystem", 00:09:54.055 "req_id": 1 00:09:54.055 } 00:09:54.055 Got JSON-RPC error response 00:09:54.055 response: 00:09:54.055 { 00:09:54.055 "code": -32602, 00:09:54.055 "message": "Invalid cntlid range [65520-65519]" 00:09:54.055 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:54.055 14:44:27 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17338 -I 0 00:09:54.311 [2024-07-15 14:44:27.984049] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17338: invalid cntlid range [1-0] 00:09:54.311 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:54.311 { 00:09:54.311 "nqn": "nqn.2016-06.io.spdk:cnode17338", 00:09:54.311 "max_cntlid": 0, 00:09:54.311 "method": "nvmf_create_subsystem", 00:09:54.311 "req_id": 1 00:09:54.311 } 00:09:54.311 Got JSON-RPC error response 00:09:54.311 response: 00:09:54.311 { 00:09:54.311 "code": -32602, 00:09:54.311 "message": "Invalid cntlid range [1-0]" 00:09:54.311 }' 00:09:54.311 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:54.311 { 00:09:54.311 "nqn": "nqn.2016-06.io.spdk:cnode17338", 00:09:54.311 "max_cntlid": 0, 00:09:54.311 "method": "nvmf_create_subsystem", 00:09:54.311 "req_id": 1 00:09:54.311 } 00:09:54.311 Got JSON-RPC error response 00:09:54.311 response: 00:09:54.311 { 00:09:54.311 "code": -32602, 00:09:54.311 "message": "Invalid cntlid range [1-0]" 00:09:54.311 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:54.311 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20126 -I 65520 00:09:54.311 [2024-07-15 14:44:28.156701] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20126: invalid cntlid range [1-65520] 00:09:54.311 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:54.311 { 00:09:54.311 "nqn": "nqn.2016-06.io.spdk:cnode20126", 00:09:54.311 "max_cntlid": 65520, 00:09:54.311 "method": "nvmf_create_subsystem", 00:09:54.311 "req_id": 1 00:09:54.311 } 00:09:54.311 Got JSON-RPC error response 00:09:54.311 response: 00:09:54.311 { 00:09:54.311 
"code": -32602, 00:09:54.311 "message": "Invalid cntlid range [1-65520]" 00:09:54.311 }' 00:09:54.311 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:54.311 { 00:09:54.311 "nqn": "nqn.2016-06.io.spdk:cnode20126", 00:09:54.311 "max_cntlid": 65520, 00:09:54.311 "method": "nvmf_create_subsystem", 00:09:54.311 "req_id": 1 00:09:54.311 } 00:09:54.311 Got JSON-RPC error response 00:09:54.311 response: 00:09:54.311 { 00:09:54.311 "code": -32602, 00:09:54.311 "message": "Invalid cntlid range [1-65520]" 00:09:54.311 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:54.311 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25611 -i 6 -I 5 00:09:54.568 [2024-07-15 14:44:28.337386] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25611: invalid cntlid range [6-5] 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:54.568 { 00:09:54.568 "nqn": "nqn.2016-06.io.spdk:cnode25611", 00:09:54.568 "min_cntlid": 6, 00:09:54.568 "max_cntlid": 5, 00:09:54.568 "method": "nvmf_create_subsystem", 00:09:54.568 "req_id": 1 00:09:54.568 } 00:09:54.568 Got JSON-RPC error response 00:09:54.568 response: 00:09:54.568 { 00:09:54.568 "code": -32602, 00:09:54.568 "message": "Invalid cntlid range [6-5]" 00:09:54.568 }' 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:54.568 { 00:09:54.568 "nqn": "nqn.2016-06.io.spdk:cnode25611", 00:09:54.568 "min_cntlid": 6, 00:09:54.568 "max_cntlid": 5, 00:09:54.568 "method": "nvmf_create_subsystem", 00:09:54.568 "req_id": 1 00:09:54.568 } 00:09:54.568 Got JSON-RPC error response 00:09:54.568 response: 00:09:54.568 { 00:09:54.568 "code": -32602, 00:09:54.568 "message": "Invalid cntlid range [6-5]" 00:09:54.568 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:54.568 { 00:09:54.568 "name": "foobar", 00:09:54.568 "method": "nvmf_delete_target", 00:09:54.568 "req_id": 1 00:09:54.568 } 00:09:54.568 Got JSON-RPC error response 00:09:54.568 response: 00:09:54.568 { 00:09:54.568 "code": -32602, 00:09:54.568 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:54.568 }' 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:54.568 { 00:09:54.568 "name": "foobar", 00:09:54.568 "method": "nvmf_delete_target", 00:09:54.568 "req_id": 1 00:09:54.568 } 00:09:54.568 Got JSON-RPC error response 00:09:54.568 response: 00:09:54.568 { 00:09:54.568 "code": -32602, 00:09:54.568 "message": "The specified target doesn't exist, cannot delete it." 
00:09:54.568 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:54.568 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:54.568 rmmod nvme_rdma 00:09:54.826 rmmod nvme_fabrics 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2738047 ']' 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2738047 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 2738047 ']' 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 2738047 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2738047 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2738047' 00:09:54.826 killing process with pid 2738047 00:09:54.826 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 2738047 00:09:54.827 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 2738047 00:09:55.084 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:55.084 14:44:28 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:55.084 00:09:55.084 real 0m9.966s 00:09:55.084 user 0m20.085s 00:09:55.084 sys 0m5.180s 00:09:55.084 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.084 14:44:28 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:55.084 ************************************ 00:09:55.084 END TEST nvmf_invalid 00:09:55.084 ************************************ 00:09:55.084 14:44:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:55.084 14:44:28 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:55.084 14:44:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:55.084 14:44:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.084 
14:44:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:55.084 ************************************ 00:09:55.084 START TEST nvmf_abort 00:09:55.084 ************************************ 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:55.084 * Looking for test storage... 00:09:55.084 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.084 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.085 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.085 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.085 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.085 14:44:28 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.085 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.085 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.342 14:44:29 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.667 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:00.667 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:00.668 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:00.668 Found net devices under 0000:da:00.0: mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:00.668 Found net devices under 0000:da:00.1: mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:00.668 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:00.668 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:00.668 altname enp218s0f0np0 00:10:00.668 altname ens818f0np0 00:10:00.668 inet 192.168.100.8/24 scope global mlx_0_0 00:10:00.668 valid_lft forever preferred_lft forever 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:00.668 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:00.668 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:00.668 altname enp218s0f1np1 00:10:00.668 altname ens818f1np1 00:10:00.668 inet 192.168.100.9/24 scope global mlx_0_1 00:10:00.668 valid_lft forever preferred_lft forever 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:00.668 192.168.100.9' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:00.668 192.168.100.9' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:00.668 192.168.100.9' 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:00.668 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
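The allocate_nic_ips / get_available_rdma_ips trace above reduces to one short pipeline per RDMA-capable netdev; a minimal sketch, using only the interface names and helper commands that appear in this run (the surrounding loop and error handling live in nvmf/common.sh and are omitted here):

  # derive the IPv4 address assigned to each mlx5 netdev, as get_ip_address does in the trace
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9
  # the first address becomes NVMF_FIRST_TARGET_IP, the second NVMF_SECOND_TARGET_IP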
00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2741975 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2741975 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2741975 ']' 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.669 14:44:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.669 [2024-07-15 14:44:33.950630] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:10:00.669 [2024-07-15 14:44:33.950674] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.669 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.669 [2024-07-15 14:44:34.005802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:00.669 [2024-07-15 14:44:34.084609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.669 [2024-07-15 14:44:34.084644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.669 [2024-07-15 14:44:34.084651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.669 [2024-07-15 14:44:34.084656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.669 [2024-07-15 14:44:34.084661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.669 [2024-07-15 14:44:34.084759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.669 [2024-07-15 14:44:34.084866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.669 [2024-07-15 14:44:34.084867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.925 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:00.925 [2024-07-15 14:44:34.832967] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x82e200/0x8326f0) succeed. 00:10:00.925 [2024-07-15 14:44:34.841949] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x82f7a0/0x873d80) succeed. 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.182 Malloc0 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.182 Delay0 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.182 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.183 [2024-07-15 14:44:34.990210] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:01.183 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.183 14:44:34 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:01.183 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.183 14:44:34 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:01.183 14:44:35 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.183 14:44:35 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:01.183 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.183 [2024-07-15 14:44:35.078238] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:03.707 Initializing NVMe Controllers 00:10:03.707 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:03.707 controller IO queue size 128 less than required 00:10:03.707 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:03.707 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:03.707 Initialization complete. Launching workers. 00:10:03.707 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51838 00:10:03.707 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51899, failed to submit 62 00:10:03.707 success 51839, unsuccess 60, failed 0 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:03.707 rmmod nvme_rdma 00:10:03.707 rmmod nvme_fabrics 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
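The abort run above is driven entirely through the JSON-RPC interface; stripped of the xtrace noise, the target setup and workload correspond roughly to the following sequence. This is a sketch, not the harness itself: it assumes a built SPDK tree at the workspace path shown in this log and an RDMA port at 192.168.100.8, and the $rpc shorthand plus backgrounding with & stand in for what nvmfappstart/rpc_cmd do in the actual scripts.

  # start the target (core mask 0xE = cores 1-3, matching the reactors in this run)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # transport, backing bdev (64 MiB malloc wrapped in a delay bdev), subsystem, namespace, listener
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
  $rpc bdev_malloc_create 64 4096 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # drive aborts against it with the bundled example: queue depth 128, 1 second, core 0
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128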
00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2741975 ']' 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2741975 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2741975 ']' 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2741975 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2741975 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2741975' 00:10:03.707 killing process with pid 2741975 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2741975 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2741975 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:03.707 00:10:03.707 real 0m8.648s 00:10:03.707 user 0m13.857s 00:10:03.707 sys 0m4.023s 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:03.707 14:44:37 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:03.707 ************************************ 00:10:03.707 END TEST nvmf_abort 00:10:03.707 ************************************ 00:10:03.707 14:44:37 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:03.707 14:44:37 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:03.707 14:44:37 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:03.707 14:44:37 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:03.707 14:44:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:03.707 ************************************ 00:10:03.707 START TEST nvmf_ns_hotplug_stress 00:10:03.707 ************************************ 00:10:03.707 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:03.965 * Looking for test storage... 
00:10:03.965 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:03.965 14:44:37 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:09.247 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:09.247 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:da:00.0: mlx_0_0' 00:10:09.247 Found net devices under 0000:da:00.0: mlx_0_0 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.247 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:09.248 Found net devices under 0000:da:00.1: mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:09.248 14:44:42 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:09.248 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:09.248 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:09.248 altname enp218s0f0np0 00:10:09.248 altname ens818f0np0 00:10:09.248 inet 192.168.100.8/24 scope global mlx_0_0 00:10:09.248 valid_lft forever preferred_lft forever 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:09.248 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:09.248 
link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:09.248 altname enp218s0f1np1 00:10:09.248 altname ens818f1np1 00:10:09.248 inet 192.168.100.9/24 scope global mlx_0_1 00:10:09.248 valid_lft forever preferred_lft forever 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:09.248 192.168.100.9' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:09.248 192.168.100.9' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:09.248 192.168.100.9' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.248 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2745632 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2745632 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2745632 ']' 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
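For reference, the address extraction the trace is doing here (nvmf/common.sh@112-@113) amounts to parsing `ip -o -4 addr show` for each mlx interface. The sketch below is a simplified stand-in for that helper, not the exact nvmf/common.sh implementation; interface names and the resulting 192.168.100.8 / 192.168.100.9 values are taken from the log above.

  # Sketch: how the per-port IPv4 address is derived (simplified stand-in).
  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per address; field 4 is addr/prefix, e.g. 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
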
00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.249 14:44:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.249 [2024-07-15 14:44:43.014630] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:10:09.249 [2024-07-15 14:44:43.014677] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.249 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.249 [2024-07-15 14:44:43.070699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.249 [2024-07-15 14:44:43.145756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.249 [2024-07-15 14:44:43.145798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.249 [2024-07-15 14:44:43.145804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.249 [2024-07-15 14:44:43.145811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.249 [2024-07-15 14:44:43.145816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.249 [2024-07-15 14:44:43.145932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.249 [2024-07-15 14:44:43.146044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.249 [2024-07-15 14:44:43.146046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:10.181 14:44:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:10.181 [2024-07-15 14:44:44.025613] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x899200/0x89d6f0) succeed. 00:10:10.181 [2024-07-15 14:44:44.034633] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x89a7a0/0x8ded80) succeed. 
00:10:10.437 14:44:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:10.437 14:44:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:10.692 [2024-07-15 14:44:44.510704] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:10.693 14:44:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:10.949 14:44:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:10.949 Malloc0 00:10:11.206 14:44:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:11.206 Delay0 00:10:11.206 14:44:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.463 14:44:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:11.721 NULL1 00:10:11.721 14:44:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:11.721 14:44:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2746045 00:10:11.721 14:44:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:11.721 14:44:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:11.721 14:44:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.721 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.091 Read completed with error (sct=0, sc=11) 00:10:13.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.091 14:44:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.091 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:10:13.091 14:44:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:13.091 14:44:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:13.348 true 00:10:13.348 14:44:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:13.348 14:44:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.276 14:44:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.276 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.276 14:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:14.276 14:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:14.532 true 00:10:14.532 14:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:14.532 14:44:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.460 14:44:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:15.460 14:44:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:15.460 14:44:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:15.717 true 00:10:15.717 14:44:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:15.717 14:44:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.647 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:10:16.647 14:44:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.647 14:44:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:16.647 14:44:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:16.904 true 00:10:16.904 14:44:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:16.904 14:44:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.837 14:44:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.837 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.837 14:44:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:17.837 14:44:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:18.122 true 00:10:18.122 14:44:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:18.122 14:44:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.058 14:44:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.058 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.058 14:44:52 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:19.058 14:44:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:19.316 true 00:10:19.316 14:44:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:19.316 14:44:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 14:44:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.248 14:44:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:20.248 14:44:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:20.506 true 00:10:20.506 14:44:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:20.506 14:44:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.439 14:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.439 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.696 14:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:21.696 14:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:21.696 true 00:10:21.696 14:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:21.696 14:44:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
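The repeating pattern in the trace (add_ns Delay0, bdev_null_resize, kill -0, remove_ns, with perf reporting rate-limited sc=11 read errors in between) is the hot-plug stress loop itself. The sketch below is an approximate reconstruction from the script line numbers visible above (@40-@50); it assumes the Delay0 and NULL1 bdevs created earlier in the trace, and the exact ordering and termination details of ns_hotplug_stress.sh may differ.

  # Sketch: the namespace hot-plug loop behind the suppressed read errors above
  # (approximate reconstruction; commands and arguments are copied from the trace).
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  NQN=nqn.2016-06.io.spdk:cnode1

  # 30 s of queued random reads; read errors are expected while namespace 1 is
  # detached, and the trace shows them rate-limited ("Message suppressed 999 times").
  $perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do      # @44: run until perf exits
      $rpc nvmf_subsystem_remove_ns "$NQN" 1     # @45: hot-remove namespace 1
      $rpc nvmf_subsystem_add_ns "$NQN" Delay0   # @46: re-attach it
      ((++null_size))                            # @49: 1001, 1002, ...
      $rpc bdev_null_resize NULL1 "$null_size"   # @50: resize namespace 2's bdev
  done
  wait "$PERF_PID"                               # @53: collect perf's exit status
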
00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.629 14:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.887 14:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:22.887 14:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:22.887 true 00:10:22.887 14:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:22.887 14:44:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.821 14:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.821 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.079 14:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:24.079 14:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:24.079 true 00:10:24.079 14:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:24.079 14:44:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.016 14:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.016 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:10:25.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.016 14:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:25.016 14:44:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:25.274 true 00:10:25.274 14:44:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:25.274 14:44:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.208 14:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.208 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.464 14:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:26.464 14:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:26.464 true 00:10:26.722 14:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:26.722 14:45:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.285 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.541 14:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.541 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.541 14:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:27.541 14:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:27.798 true 00:10:27.798 14:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:27.798 14:45:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 14:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.730 14:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:28.730 14:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:28.987 true 00:10:28.987 14:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:28.987 14:45:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 14:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.921 14:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:29.921 14:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:30.177 true 00:10:30.177 14:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:30.177 14:45:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.109 14:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.109 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:10:31.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.109 14:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:31.109 14:45:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:31.366 true 00:10:31.366 14:45:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:31.366 14:45:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.408 14:45:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:32.408 14:45:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:32.408 14:45:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:32.690 true 00:10:32.690 14:45:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:32.690 14:45:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 14:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:33.624 14:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:33.624 14:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:33.882 true 00:10:33.882 14:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:33.882 14:45:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.835 14:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.835 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.835 14:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:34.835 14:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:35.091 true 00:10:35.091 14:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:35.091 14:45:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 14:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:36.025 14:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:36.025 14:45:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:36.284 true 00:10:36.284 14:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:36.284 14:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.220 14:45:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.220 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.220 14:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:37.220 14:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:37.478 true 00:10:37.478 14:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:37.478 14:45:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.413 14:45:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.672 14:45:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:38.672 14:45:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:38.672 true 00:10:38.672 14:45:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:38.672 14:45:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.606 14:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.606 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.864 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.864 14:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:39.864 14:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:39.864 true 00:10:39.864 14:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:39.864 14:45:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.798 14:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.798 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.056 14:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:41.056 14:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:41.056 true 00:10:41.056 14:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:41.056 14:45:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.991 14:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.991 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.250 14:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:42.250 14:45:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:42.250 true 00:10:42.250 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:42.250 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.509 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.767 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:42.767 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:42.767 true 00:10:42.767 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:42.767 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.026 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.284 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:43.284 14:45:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:43.284 true 00:10:43.284 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:43.284 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.542 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.801 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:43.801 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:43.801 true 00:10:44.060 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:44.060 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.060 Initializing NVMe Controllers 00:10:44.060 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.060 Controller IO queue size 128, less than required. 00:10:44.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:44.060 Controller IO queue size 128, less than required. 00:10:44.060 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:44.060 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:44.060 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:44.060 Initialization complete. Launching workers. 
00:10:44.060 ======================================================== 00:10:44.060 Latency(us) 00:10:44.060 Device Information : IOPS MiB/s Average min max 00:10:44.060 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5976.90 2.92 18800.15 877.16 1138014.51 00:10:44.060 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33440.50 16.33 3827.68 1876.16 291966.72 00:10:44.060 ======================================================== 00:10:44.060 Total : 39417.40 19.25 6097.97 877.16 1138014.51 00:10:44.060 00:10:44.060 14:45:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.319 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:44.319 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:44.578 true 00:10:44.578 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2746045 00:10:44.578 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2746045) - No such process 00:10:44.578 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2746045 00:10:44.578 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.578 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:44.837 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:10:44.837 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:44.837 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:44.837 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:44.837 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:45.096 null0 00:10:45.096 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.096 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:45.096 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:45.096 null1 00:10:45.096 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.096 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:45.096 14:45:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:45.354 null2 00:10:45.354 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.354 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:10:45.354 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:45.612 null3 00:10:45.612 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.612 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:45.612 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:45.612 null4 00:10:45.612 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.612 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:45.612 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:45.871 null5 00:10:45.871 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:45.871 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:45.871 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:46.130 null6 00:10:46.130 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.130 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.130 14:45:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:46.130 null7 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
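The xtrace entries just above are the expansion of one add_remove worker: ns_hotplug_stress.sh line 14 sets "local nsid=1 bdev=null0", and lines 16-18 drive ten add/remove cycles against nqn.2016-06.io.spdk:cnode1. Here is a minimal bash sketch of that pattern, reconstructed only from the traced commands; the rpc.py path, the argument order, and the 10-iteration count are taken from the trace, while the variable handling and comments are illustrative assumptions rather than the script's verbatim source.

    # Hypothetical reconstruction of the traced add/remove worker
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py  # path as it appears in the trace

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do
            # attach the null bdev as namespace $nsid of the target subsystem ...
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # ... then detach it again, so every pass is one namespace hot-add/hot-remove
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

The same launch pattern repeats below for null1 through null7.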
00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:46.389 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
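Continuing the reconstruction, the eight workers whose pids are collected above (and reaped by the wait call that follows) are set up roughly as sketched below. It reuses rpc_py and add_remove from the sketch above, takes nthreads=8 and the bdev_null_create arguments 100 4096 straight from the trace, and is an approximation of the traced flow, not the literal ns_hotplug_stress.sh source.

    # Hypothetical reconstruction of the worker setup seen in the trace
    nthreads=8
    pids=()

    # one null bdev per worker (name, size in MB, block size), as traced at script line 60
    for ((i = 0; i < nthreads; ++i)); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done

    # one background hot-plug worker per bdev; remember each pid
    for ((i = 0; i < nthreads; ++i)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done

    # block until every worker has finished its ten add/remove cycles
    wait "${pids[@]}"

The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns entries that follow are these eight workers running concurrently against the same subsystem.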
00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2752525 2752528 2752532 2752535 2752541 2752546 2752549 2752552 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.390 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.648 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:46.649 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:46.906 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:46.906 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:46.906 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.906 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:46.906 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:46.906 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:46.907 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.164 14:45:20 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.164 14:45:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.164 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.421 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.422 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.679 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:47.679 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.679 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.679 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.679 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.679 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.680 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.938 14:45:21 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:47.938 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:47.939 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:47.939 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.196 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.196 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.196 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.454 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.455 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.455 14:45:22 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:48.713 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:48.971 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.229 14:45:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.229 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.485 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:49.741 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:49.998 rmmod nvme_rdma 00:10:49.998 rmmod nvme_fabrics 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2745632 ']' 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2745632 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2745632 ']' 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2745632 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:49.998 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2745632 00:10:50.255 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:50.255 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:50.255 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2745632' 00:10:50.255 killing process with pid 2745632 00:10:50.256 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@967 -- # kill 2745632 00:10:50.256 14:45:23 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2745632 00:10:50.514 14:45:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:50.514 14:45:24 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:50.514 00:10:50.514 real 0m46.589s 00:10:50.514 user 3m16.777s 00:10:50.514 sys 0m11.200s 00:10:50.514 14:45:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.514 14:45:24 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.514 ************************************ 00:10:50.514 END TEST nvmf_ns_hotplug_stress 00:10:50.514 ************************************ 00:10:50.514 14:45:24 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:50.514 14:45:24 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:50.514 14:45:24 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.514 14:45:24 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.514 14:45:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:50.514 ************************************ 00:10:50.514 START TEST nvmf_connect_stress 00:10:50.514 ************************************ 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:50.514 * Looking for test storage... 00:10:50.514 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # 
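
The nvmf_ns_hotplug_stress test whose END marker appears above (just before nvmf_connect_stress starts) is driven by target/ns_hotplug_stress.sh: its trace shows a counted loop (script lines 16-18) that attaches the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 and then detaches them again while initiators stay connected. A minimal sketch of that add/remove loop, reconstructed only from the traced rpc.py calls; the surrounding subsystem and bdev setup is assumed to already exist, and the parallel dispatch is an inference from the interleaved timestamps, not a copy of the real script.

#!/usr/bin/env bash
# Reconstruction of the namespace hotplug loop traced above.
# Assumptions: rpc.py path as in the trace, subsystem nqn.2016-06.io.spdk:cnode1
# already created, null bdevs null0..null7 already created.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; i++ )); do
    # Attach null0..null7 as namespace IDs 1..8. In the trace these calls
    # land at the same timestamp and interleave, so they are issued in
    # parallel here as well (an assumption about the original script).
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # Detach them again so connected hosts see the namespaces disappear.
    for n in {1..8}; do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done
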
NVME_CONNECT='nvme connect' 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.514 
14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:50.514 14:45:24 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:55.776 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:55.776 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:55.776 Found net devices under 0000:da:00.0: mlx_0_0 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:55.776 Found net devices under 0000:da:00.1: mlx_0_1 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe 
rdma_ucm 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:55.776 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:55.776 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:55.776 altname enp218s0f0np0 00:10:55.776 altname ens818f0np0 00:10:55.776 inet 192.168.100.8/24 scope global mlx_0_0 00:10:55.776 valid_lft forever preferred_lft forever 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 
-- # get_ip_address mlx_0_1 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:55.776 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:55.777 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:56.035 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:56.035 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:56.035 altname enp218s0f1np1 00:10:56.035 altname ens818f1np1 00:10:56.035 inet 192.168.100.9/24 scope global mlx_0_1 00:10:56.035 valid_lft forever preferred_lft forever 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name 
in $(get_rdma_if_list) 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:56.035 192.168.100.9' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:56.035 192.168.100.9' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:56.035 192.168.100.9' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2756437 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2756437 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2756437 ']' 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
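
The get_ip_address calls traced above show how nvmf/common.sh resolves each RDMA-capable port (mlx_0_0, mlx_0_1) to its IPv4 address: "ip -o -4 addr show" piped through awk and cut. A small self-contained restatement of that helper, assuming only iproute2 is available; the interface names are the ones reported in the log.

# Restatement of the get_ip_address helper seen in the trace: print the
# IPv4 address(es) of an interface without the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Collect the addresses of the two mlx5 ports found earlier in the log.
for nic in mlx_0_0 mlx_0_1; do
    addr=$(get_ip_address "$nic")
    if [[ -z $addr ]]; then
        echo "no IPv4 address configured on $nic" >&2
        continue
    fi
    echo "$nic -> $addr"
done
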
00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.035 14:45:29 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.035 [2024-07-15 14:45:29.816477] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:10:56.035 [2024-07-15 14:45:29.816523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.035 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.035 [2024-07-15 14:45:29.866213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.035 [2024-07-15 14:45:29.944268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.035 [2024-07-15 14:45:29.944306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.035 [2024-07-15 14:45:29.944313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.035 [2024-07-15 14:45:29.944319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.035 [2024-07-15 14:45:29.944324] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.035 [2024-07-15 14:45:29.944423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.035 [2024-07-15 14:45:29.944530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.036 [2024-07-15 14:45:29.944531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.967 [2024-07-15 14:45:30.688050] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2208200/0x220c6f0) succeed. 00:10:56.967 [2024-07-15 14:45:30.697093] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22097a0/0x224dd80) succeed. 
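
Between the target start-up above and the subsystem, listener and null-bdev calls that follow immediately below, connect_stress.sh configures its test target entirely over JSON-RPC. The same bring-up issued directly with rpc.py against an already-running nvmf_tgt on the default /var/tmp/spdk.sock would look roughly like this; all values are copied from the trace, and the final add_ns call is not visible in this excerpt, so it is an assumed next step rather than a traced one.

# Sketch of the target configuration traced around this point, assuming
# nvmf_tgt is already running and reachable on /var/tmp/spdk.sock.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport with the shared-buffer count and 8 KiB I/O unit size from the trace.
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Subsystem that accepts any host (-a), serial number as traced, at most 10 namespaces.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listener on the first mlx5 port address discovered earlier.
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# 1000 MB null bdev with 512-byte blocks as the backing device.
"$rpc" bdev_null_create NULL1 1000 512

# Not shown in this part of the log; the obvious follow-up is to expose NULL1 as a namespace.
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
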
00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.967 [2024-07-15 14:45:30.812216] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.967 NULL1 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2756507 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.967 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.968 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.968 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.968 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.968 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:56.968 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:56.968 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.225 14:45:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.482 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.482 14:45:31 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:57.482 14:45:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.482 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.482 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.740 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.740 14:45:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:57.740 14:45:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.740 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.740 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.997 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.997 14:45:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:57.997 14:45:31 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.997 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.997 14:45:31 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.561 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.561 14:45:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:58.561 14:45:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.561 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.561 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.818 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.818 14:45:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:58.818 14:45:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.818 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.818 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.074 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.074 14:45:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:59.074 14:45:32 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.074 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.074 14:45:32 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.331 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.331 14:45:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:59.331 14:45:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.331 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.331 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:59.896 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.896 14:45:33 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:10:59.896 14:45:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.896 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.896 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.153 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.153 14:45:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:00.153 14:45:33 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.153 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.153 14:45:33 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.411 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.411 14:45:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:00.411 14:45:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.411 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.411 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.669 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.669 14:45:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:00.669 14:45:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.669 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.669 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:00.926 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.926 14:45:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:00.926 14:45:34 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.926 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.926 14:45:34 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.491 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.491 14:45:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:01.491 14:45:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.491 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.491 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:01.749 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.749 14:45:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:01.749 14:45:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.749 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.749 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.006 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.006 14:45:35 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:02.006 14:45:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.006 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.006 14:45:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.264 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.264 14:45:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:02.264 14:45:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.264 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.264 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:02.828 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.828 14:45:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:02.828 14:45:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:02.828 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.828 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.085 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.085 14:45:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:03.085 14:45:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.085 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.085 14:45:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.342 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.342 14:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:03.342 14:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.342 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.342 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.599 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.599 14:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:03.599 14:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.599 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.599 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:03.855 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.855 14:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:03.855 14:45:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:03.855 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.855 14:45:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.418 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.418 14:45:38 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:04.418 14:45:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.418 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.418 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.674 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.674 14:45:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:04.674 14:45:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.674 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.674 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:04.932 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.932 14:45:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:04.932 14:45:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:04.932 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.932 14:45:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.189 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.189 14:45:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:05.189 14:45:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.189 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.189 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.753 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.753 14:45:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:05.753 14:45:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.753 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.753 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.010 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.010 14:45:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:06.010 14:45:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.010 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.010 14:45:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.266 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.267 14:45:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:06.267 14:45:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.267 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.267 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:06.522 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.522 14:45:40 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:06.522 14:45:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:06.522 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.522 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.084 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.084 14:45:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:07.084 14:45:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:07.084 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.084 14:45:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.084 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2756507 00:11:07.341 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2756507) - No such process 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2756507 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:07.341 rmmod nvme_rdma 00:11:07.341 rmmod nvme_fabrics 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2756437 ']' 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2756437 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2756437 ']' 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2756437 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2756437 
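
The long run of kill -0 2756507 probes above is connect_stress.sh supervising the stress tool: it launches the connect_stress example in the background, pre-builds a batch of RPCs in rpc.txt (the seq 1 20 / cat lines earlier), and keeps exercising the target's RPC server for as long as the tool's PID still exists. A hedged sketch of that supervision pattern follows; the contents of rpc.txt are not visible in this excerpt, so a single cheap query (nvmf_get_subsystems) stands in for replaying the batch file.

# Supervision pattern reconstructed from the trace. Paths and arguments are
# taken from the log; the RPC inside the loop is a stand-in, not the real batch.
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"

"$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -t 10 &
PERF_PID=$!

# kill -0 delivers no signal; it only tests whether the PID still exists.
# Once connect_stress finishes its 10-second run the probe fails (that is
# the "No such process" message in the log) and the loop ends.
while kill -0 "$PERF_PID" 2> /dev/null; do
    "$rpc" nvmf_get_subsystems > /dev/null
done

wait "$PERF_PID"   # pick up the stress tool's exit status
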
00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2756437' 00:11:07.341 killing process with pid 2756437 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2756437 00:11:07.341 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2756437 00:11:07.599 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.599 14:45:41 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:07.599 00:11:07.599 real 0m17.121s 00:11:07.599 user 0m42.032s 00:11:07.599 sys 0m5.687s 00:11:07.599 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.599 14:45:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:07.599 ************************************ 00:11:07.599 END TEST nvmf_connect_stress 00:11:07.599 ************************************ 00:11:07.599 14:45:41 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:07.599 14:45:41 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:07.599 14:45:41 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:07.599 14:45:41 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:07.599 14:45:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:07.599 ************************************ 00:11:07.599 START TEST nvmf_fused_ordering 00:11:07.599 ************************************ 00:11:07.599 14:45:41 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:07.856 * Looking for test storage... 
00:11:07.856 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:07.856 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:07.857 14:45:41 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:13.232 14:45:46 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:13.232 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:13.232 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.232 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:13.233 Found net devices under 0000:da:00.0: mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:13.233 Found net devices under 0000:da:00.1: mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:13.233 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.233 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:13.233 altname enp218s0f0np0 00:11:13.233 altname ens818f0np0 00:11:13.233 inet 192.168.100.8/24 scope global mlx_0_0 00:11:13.233 valid_lft forever preferred_lft forever 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:13.233 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.233 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:13.233 altname enp218s0f1np1 00:11:13.233 altname ens818f1np1 00:11:13.233 inet 192.168.100.9/24 scope global mlx_0_1 00:11:13.233 valid_lft forever preferred_lft forever 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:13.233 192.168.100.9' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:13.233 192.168.100.9' 
00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:13.233 192.168.100.9' 00:11:13.233 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:11:13.234 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:11:13.234 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:13.234 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:13.234 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:13.234 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:13.234 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:13.234 14:45:46 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2761417 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2761417 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2761417 ']' 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:13.234 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.234 [2024-07-15 14:45:47.067718] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:11:13.234 [2024-07-15 14:45:47.067768] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.234 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.234 [2024-07-15 14:45:47.123650] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.491 [2024-07-15 14:45:47.204665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
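The trace above builds RDMA_IP_LIST from the addresses found on mlx_0_0 and mlx_0_1 and then splits it into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head/tail. A minimal standalone sketch of that parsing, assuming the same two addresses reported in this run (the get_if_ip helper name is illustrative, not part of the test scripts):

# Hedged sketch: reproduce the IP selection performed by the common.sh trace above.
# Address extraction mirrors: ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1
get_if_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST="$(get_if_ip mlx_0_0)
$(get_if_ip mlx_0_1)"                                    # 192.168.100.8 / 192.168.100.9 in this run
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"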
00:11:13.491 [2024-07-15 14:45:47.204700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.491 [2024-07-15 14:45:47.204706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.491 [2024-07-15 14:45:47.204712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.491 [2024-07-15 14:45:47.204717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.491 [2024-07-15 14:45:47.204740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.058 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.058 [2024-07-15 14:45:47.930652] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x119ac20/0x119f110) succeed. 00:11:14.058 [2024-07-15 14:45:47.939417] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x119c120/0x11e07a0) succeed. 
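At this point the nvmf target is up and the rpc_cmd trace has created the RDMA transport; the calls that follow below add the subsystem, its RDMA listener, a null bdev, and the namespace before launching the fused_ordering tool. A minimal sketch of the same configuration driven through SPDK's scripts/rpc.py against a running nvmf_tgt is shown here, with the default /var/tmp/spdk.sock socket assumed and the parameter values copied from this run:

# Hedged sketch: the target configuration this test applies, issued via scripts/rpc.py.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks (reported below as a 1GB namespace)
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1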
00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.322 14:45:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.322 [2024-07-15 14:45:48.000106] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.322 NULL1 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.322 14:45:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:14.322 [2024-07-15 14:45:48.055382] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:11:14.322 [2024-07-15 14:45:48.055427] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761662 ] 00:11:14.322 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.322 Attached to nqn.2016-06.io.spdk:cnode1 00:11:14.322 Namespace ID: 1 size: 1GB 00:11:14.322 fused_ordering(0) 00:11:14.322 fused_ordering(1) 00:11:14.322 fused_ordering(2) 00:11:14.322 fused_ordering(3) 00:11:14.322 fused_ordering(4) 00:11:14.322 fused_ordering(5) 00:11:14.322 fused_ordering(6) 00:11:14.322 fused_ordering(7) 00:11:14.322 fused_ordering(8) 00:11:14.322 fused_ordering(9) 00:11:14.322 fused_ordering(10) 00:11:14.322 fused_ordering(11) 00:11:14.322 fused_ordering(12) 00:11:14.322 fused_ordering(13) 00:11:14.322 fused_ordering(14) 00:11:14.322 fused_ordering(15) 00:11:14.322 fused_ordering(16) 00:11:14.322 fused_ordering(17) 00:11:14.322 fused_ordering(18) 00:11:14.322 fused_ordering(19) 00:11:14.322 fused_ordering(20) 00:11:14.322 fused_ordering(21) 00:11:14.322 fused_ordering(22) 00:11:14.322 fused_ordering(23) 00:11:14.322 fused_ordering(24) 00:11:14.322 fused_ordering(25) 00:11:14.322 fused_ordering(26) 00:11:14.322 fused_ordering(27) 00:11:14.322 fused_ordering(28) 00:11:14.322 fused_ordering(29) 00:11:14.322 fused_ordering(30) 00:11:14.322 fused_ordering(31) 00:11:14.322 fused_ordering(32) 00:11:14.322 fused_ordering(33) 00:11:14.322 fused_ordering(34) 00:11:14.322 fused_ordering(35) 00:11:14.322 fused_ordering(36) 00:11:14.322 fused_ordering(37) 00:11:14.322 fused_ordering(38) 00:11:14.322 fused_ordering(39) 00:11:14.322 fused_ordering(40) 00:11:14.322 fused_ordering(41) 00:11:14.322 fused_ordering(42) 00:11:14.322 fused_ordering(43) 00:11:14.322 fused_ordering(44) 00:11:14.322 fused_ordering(45) 00:11:14.322 fused_ordering(46) 00:11:14.322 fused_ordering(47) 00:11:14.322 fused_ordering(48) 00:11:14.322 fused_ordering(49) 00:11:14.322 fused_ordering(50) 00:11:14.322 fused_ordering(51) 00:11:14.322 fused_ordering(52) 00:11:14.322 fused_ordering(53) 00:11:14.322 fused_ordering(54) 00:11:14.322 fused_ordering(55) 00:11:14.322 fused_ordering(56) 00:11:14.322 fused_ordering(57) 00:11:14.322 fused_ordering(58) 00:11:14.322 fused_ordering(59) 00:11:14.322 fused_ordering(60) 00:11:14.322 fused_ordering(61) 00:11:14.322 fused_ordering(62) 00:11:14.322 fused_ordering(63) 00:11:14.322 fused_ordering(64) 00:11:14.322 fused_ordering(65) 00:11:14.322 fused_ordering(66) 00:11:14.322 fused_ordering(67) 00:11:14.322 fused_ordering(68) 00:11:14.322 fused_ordering(69) 00:11:14.322 fused_ordering(70) 00:11:14.322 fused_ordering(71) 00:11:14.322 fused_ordering(72) 00:11:14.322 fused_ordering(73) 00:11:14.322 fused_ordering(74) 00:11:14.322 fused_ordering(75) 00:11:14.322 fused_ordering(76) 00:11:14.322 fused_ordering(77) 00:11:14.322 fused_ordering(78) 00:11:14.322 fused_ordering(79) 00:11:14.322 fused_ordering(80) 00:11:14.322 fused_ordering(81) 00:11:14.322 fused_ordering(82) 00:11:14.322 fused_ordering(83) 00:11:14.322 fused_ordering(84) 00:11:14.322 fused_ordering(85) 00:11:14.322 fused_ordering(86) 00:11:14.322 fused_ordering(87) 00:11:14.322 fused_ordering(88) 00:11:14.322 fused_ordering(89) 00:11:14.322 fused_ordering(90) 00:11:14.322 fused_ordering(91) 00:11:14.322 fused_ordering(92) 00:11:14.322 fused_ordering(93) 00:11:14.322 fused_ordering(94) 00:11:14.322 fused_ordering(95) 00:11:14.322 fused_ordering(96) 
00:11:14.322 fused_ordering(97) 00:11:14.322 fused_ordering(98) 00:11:14.322 fused_ordering(99) 00:11:14.322 fused_ordering(100) 00:11:14.322 fused_ordering(101) 00:11:14.322 fused_ordering(102) 00:11:14.322 fused_ordering(103) 00:11:14.322 fused_ordering(104) 00:11:14.322 fused_ordering(105) 00:11:14.322 fused_ordering(106) 00:11:14.322 fused_ordering(107) 00:11:14.322 fused_ordering(108) 00:11:14.322 fused_ordering(109) 00:11:14.322 fused_ordering(110) 00:11:14.322 fused_ordering(111) 00:11:14.322 fused_ordering(112) 00:11:14.322 fused_ordering(113) 00:11:14.322 fused_ordering(114) 00:11:14.322 fused_ordering(115) 00:11:14.322 fused_ordering(116) 00:11:14.322 fused_ordering(117) 00:11:14.322 fused_ordering(118) 00:11:14.322 fused_ordering(119) 00:11:14.322 fused_ordering(120) 00:11:14.322 fused_ordering(121) 00:11:14.322 fused_ordering(122) 00:11:14.322 fused_ordering(123) 00:11:14.322 fused_ordering(124) 00:11:14.322 fused_ordering(125) 00:11:14.322 fused_ordering(126) 00:11:14.322 fused_ordering(127) 00:11:14.322 fused_ordering(128) 00:11:14.322 fused_ordering(129) 00:11:14.322 fused_ordering(130) 00:11:14.322 fused_ordering(131) 00:11:14.322 fused_ordering(132) 00:11:14.322 fused_ordering(133) 00:11:14.322 fused_ordering(134) 00:11:14.322 fused_ordering(135) 00:11:14.322 fused_ordering(136) 00:11:14.322 fused_ordering(137) 00:11:14.322 fused_ordering(138) 00:11:14.322 fused_ordering(139) 00:11:14.322 fused_ordering(140) 00:11:14.322 fused_ordering(141) 00:11:14.322 fused_ordering(142) 00:11:14.322 fused_ordering(143) 00:11:14.322 fused_ordering(144) 00:11:14.322 fused_ordering(145) 00:11:14.322 fused_ordering(146) 00:11:14.322 fused_ordering(147) 00:11:14.322 fused_ordering(148) 00:11:14.322 fused_ordering(149) 00:11:14.322 fused_ordering(150) 00:11:14.322 fused_ordering(151) 00:11:14.322 fused_ordering(152) 00:11:14.322 fused_ordering(153) 00:11:14.322 fused_ordering(154) 00:11:14.322 fused_ordering(155) 00:11:14.322 fused_ordering(156) 00:11:14.322 fused_ordering(157) 00:11:14.322 fused_ordering(158) 00:11:14.322 fused_ordering(159) 00:11:14.322 fused_ordering(160) 00:11:14.322 fused_ordering(161) 00:11:14.322 fused_ordering(162) 00:11:14.322 fused_ordering(163) 00:11:14.322 fused_ordering(164) 00:11:14.322 fused_ordering(165) 00:11:14.322 fused_ordering(166) 00:11:14.322 fused_ordering(167) 00:11:14.322 fused_ordering(168) 00:11:14.322 fused_ordering(169) 00:11:14.322 fused_ordering(170) 00:11:14.322 fused_ordering(171) 00:11:14.322 fused_ordering(172) 00:11:14.322 fused_ordering(173) 00:11:14.322 fused_ordering(174) 00:11:14.322 fused_ordering(175) 00:11:14.322 fused_ordering(176) 00:11:14.322 fused_ordering(177) 00:11:14.322 fused_ordering(178) 00:11:14.322 fused_ordering(179) 00:11:14.322 fused_ordering(180) 00:11:14.322 fused_ordering(181) 00:11:14.322 fused_ordering(182) 00:11:14.322 fused_ordering(183) 00:11:14.322 fused_ordering(184) 00:11:14.323 fused_ordering(185) 00:11:14.323 fused_ordering(186) 00:11:14.323 fused_ordering(187) 00:11:14.323 fused_ordering(188) 00:11:14.323 fused_ordering(189) 00:11:14.323 fused_ordering(190) 00:11:14.323 fused_ordering(191) 00:11:14.323 fused_ordering(192) 00:11:14.323 fused_ordering(193) 00:11:14.323 fused_ordering(194) 00:11:14.323 fused_ordering(195) 00:11:14.323 fused_ordering(196) 00:11:14.323 fused_ordering(197) 00:11:14.323 fused_ordering(198) 00:11:14.323 fused_ordering(199) 00:11:14.323 fused_ordering(200) 00:11:14.323 fused_ordering(201) 00:11:14.323 fused_ordering(202) 00:11:14.323 fused_ordering(203) 00:11:14.323 
fused_ordering(204) 00:11:14.323 fused_ordering(205) 00:11:14.582 fused_ordering(206) 00:11:14.582 fused_ordering(207) 00:11:14.582 fused_ordering(208) 00:11:14.582 fused_ordering(209) 00:11:14.582 fused_ordering(210) 00:11:14.582 fused_ordering(211) 00:11:14.582 fused_ordering(212) 00:11:14.582 fused_ordering(213) 00:11:14.582 fused_ordering(214) 00:11:14.582 fused_ordering(215) 00:11:14.582 fused_ordering(216) 00:11:14.582 fused_ordering(217) 00:11:14.582 fused_ordering(218) 00:11:14.582 fused_ordering(219) 00:11:14.582 fused_ordering(220) 00:11:14.582 fused_ordering(221) 00:11:14.582 fused_ordering(222) 00:11:14.582 fused_ordering(223) 00:11:14.582 fused_ordering(224) 00:11:14.582 fused_ordering(225) 00:11:14.582 fused_ordering(226) 00:11:14.582 fused_ordering(227) 00:11:14.582 fused_ordering(228) 00:11:14.582 fused_ordering(229) 00:11:14.582 fused_ordering(230) 00:11:14.582 fused_ordering(231) 00:11:14.582 fused_ordering(232) 00:11:14.582 fused_ordering(233) 00:11:14.582 fused_ordering(234) 00:11:14.582 fused_ordering(235) 00:11:14.582 fused_ordering(236) 00:11:14.582 fused_ordering(237) 00:11:14.582 fused_ordering(238) 00:11:14.582 fused_ordering(239) 00:11:14.582 fused_ordering(240) 00:11:14.582 fused_ordering(241) 00:11:14.582 fused_ordering(242) 00:11:14.582 fused_ordering(243) 00:11:14.582 fused_ordering(244) 00:11:14.582 fused_ordering(245) 00:11:14.582 fused_ordering(246) 00:11:14.582 fused_ordering(247) 00:11:14.582 fused_ordering(248) 00:11:14.582 fused_ordering(249) 00:11:14.582 fused_ordering(250) 00:11:14.582 fused_ordering(251) 00:11:14.582 fused_ordering(252) 00:11:14.582 fused_ordering(253) 00:11:14.582 fused_ordering(254) 00:11:14.582 fused_ordering(255) 00:11:14.582 fused_ordering(256) 00:11:14.582 fused_ordering(257) 00:11:14.582 fused_ordering(258) 00:11:14.582 fused_ordering(259) 00:11:14.582 fused_ordering(260) 00:11:14.582 fused_ordering(261) 00:11:14.582 fused_ordering(262) 00:11:14.582 fused_ordering(263) 00:11:14.582 fused_ordering(264) 00:11:14.582 fused_ordering(265) 00:11:14.582 fused_ordering(266) 00:11:14.582 fused_ordering(267) 00:11:14.582 fused_ordering(268) 00:11:14.582 fused_ordering(269) 00:11:14.582 fused_ordering(270) 00:11:14.582 fused_ordering(271) 00:11:14.582 fused_ordering(272) 00:11:14.582 fused_ordering(273) 00:11:14.582 fused_ordering(274) 00:11:14.582 fused_ordering(275) 00:11:14.582 fused_ordering(276) 00:11:14.582 fused_ordering(277) 00:11:14.582 fused_ordering(278) 00:11:14.582 fused_ordering(279) 00:11:14.582 fused_ordering(280) 00:11:14.582 fused_ordering(281) 00:11:14.582 fused_ordering(282) 00:11:14.582 fused_ordering(283) 00:11:14.582 fused_ordering(284) 00:11:14.582 fused_ordering(285) 00:11:14.582 fused_ordering(286) 00:11:14.582 fused_ordering(287) 00:11:14.582 fused_ordering(288) 00:11:14.582 fused_ordering(289) 00:11:14.582 fused_ordering(290) 00:11:14.582 fused_ordering(291) 00:11:14.582 fused_ordering(292) 00:11:14.582 fused_ordering(293) 00:11:14.582 fused_ordering(294) 00:11:14.582 fused_ordering(295) 00:11:14.582 fused_ordering(296) 00:11:14.582 fused_ordering(297) 00:11:14.582 fused_ordering(298) 00:11:14.582 fused_ordering(299) 00:11:14.582 fused_ordering(300) 00:11:14.582 fused_ordering(301) 00:11:14.582 fused_ordering(302) 00:11:14.582 fused_ordering(303) 00:11:14.582 fused_ordering(304) 00:11:14.582 fused_ordering(305) 00:11:14.582 fused_ordering(306) 00:11:14.582 fused_ordering(307) 00:11:14.582 fused_ordering(308) 00:11:14.582 fused_ordering(309) 00:11:14.582 fused_ordering(310) 00:11:14.582 fused_ordering(311) 
00:11:14.582 fused_ordering(312) 00:11:14.582 fused_ordering(313) 00:11:14.582 fused_ordering(314) 00:11:14.582 fused_ordering(315) 00:11:14.582 fused_ordering(316) 00:11:14.582 fused_ordering(317) 00:11:14.582 fused_ordering(318) 00:11:14.582 fused_ordering(319) 00:11:14.582 fused_ordering(320) 00:11:14.582 fused_ordering(321) 00:11:14.582 fused_ordering(322) 00:11:14.582 fused_ordering(323) 00:11:14.582 fused_ordering(324) 00:11:14.582 fused_ordering(325) 00:11:14.582 fused_ordering(326) 00:11:14.582 fused_ordering(327) 00:11:14.582 fused_ordering(328) 00:11:14.582 fused_ordering(329) 00:11:14.582 fused_ordering(330) 00:11:14.582 fused_ordering(331) 00:11:14.582 fused_ordering(332) 00:11:14.582 fused_ordering(333) 00:11:14.582 fused_ordering(334) 00:11:14.582 fused_ordering(335) 00:11:14.582 fused_ordering(336) 00:11:14.582 fused_ordering(337) 00:11:14.582 fused_ordering(338) 00:11:14.582 fused_ordering(339) 00:11:14.582 fused_ordering(340) 00:11:14.582 fused_ordering(341) 00:11:14.582 fused_ordering(342) 00:11:14.582 fused_ordering(343) 00:11:14.582 fused_ordering(344) 00:11:14.582 fused_ordering(345) 00:11:14.582 fused_ordering(346) 00:11:14.582 fused_ordering(347) 00:11:14.582 fused_ordering(348) 00:11:14.582 fused_ordering(349) 00:11:14.582 fused_ordering(350) 00:11:14.582 fused_ordering(351) 00:11:14.582 fused_ordering(352) 00:11:14.582 fused_ordering(353) 00:11:14.582 fused_ordering(354) 00:11:14.582 fused_ordering(355) 00:11:14.582 fused_ordering(356) 00:11:14.582 fused_ordering(357) 00:11:14.582 fused_ordering(358) 00:11:14.582 fused_ordering(359) 00:11:14.582 fused_ordering(360) 00:11:14.582 fused_ordering(361) 00:11:14.582 fused_ordering(362) 00:11:14.582 fused_ordering(363) 00:11:14.582 fused_ordering(364) 00:11:14.582 fused_ordering(365) 00:11:14.582 fused_ordering(366) 00:11:14.582 fused_ordering(367) 00:11:14.582 fused_ordering(368) 00:11:14.582 fused_ordering(369) 00:11:14.582 fused_ordering(370) 00:11:14.582 fused_ordering(371) 00:11:14.582 fused_ordering(372) 00:11:14.582 fused_ordering(373) 00:11:14.582 fused_ordering(374) 00:11:14.582 fused_ordering(375) 00:11:14.582 fused_ordering(376) 00:11:14.582 fused_ordering(377) 00:11:14.582 fused_ordering(378) 00:11:14.582 fused_ordering(379) 00:11:14.582 fused_ordering(380) 00:11:14.582 fused_ordering(381) 00:11:14.582 fused_ordering(382) 00:11:14.582 fused_ordering(383) 00:11:14.582 fused_ordering(384) 00:11:14.582 fused_ordering(385) 00:11:14.582 fused_ordering(386) 00:11:14.582 fused_ordering(387) 00:11:14.582 fused_ordering(388) 00:11:14.582 fused_ordering(389) 00:11:14.582 fused_ordering(390) 00:11:14.582 fused_ordering(391) 00:11:14.582 fused_ordering(392) 00:11:14.582 fused_ordering(393) 00:11:14.582 fused_ordering(394) 00:11:14.582 fused_ordering(395) 00:11:14.582 fused_ordering(396) 00:11:14.582 fused_ordering(397) 00:11:14.582 fused_ordering(398) 00:11:14.582 fused_ordering(399) 00:11:14.582 fused_ordering(400) 00:11:14.582 fused_ordering(401) 00:11:14.582 fused_ordering(402) 00:11:14.582 fused_ordering(403) 00:11:14.582 fused_ordering(404) 00:11:14.582 fused_ordering(405) 00:11:14.582 fused_ordering(406) 00:11:14.582 fused_ordering(407) 00:11:14.582 fused_ordering(408) 00:11:14.582 fused_ordering(409) 00:11:14.582 fused_ordering(410) 00:11:14.582 fused_ordering(411) 00:11:14.582 fused_ordering(412) 00:11:14.582 fused_ordering(413) 00:11:14.582 fused_ordering(414) 00:11:14.582 fused_ordering(415) 00:11:14.582 fused_ordering(416) 00:11:14.582 fused_ordering(417) 00:11:14.582 fused_ordering(418) 00:11:14.582 
fused_ordering(419) 00:11:14.582 fused_ordering(420) 00:11:14.582 fused_ordering(421) 00:11:14.582 fused_ordering(422) 00:11:14.582 fused_ordering(423) 00:11:14.582 fused_ordering(424) 00:11:14.582 fused_ordering(425) 00:11:14.582 fused_ordering(426) 00:11:14.582 fused_ordering(427) 00:11:14.582 fused_ordering(428) 00:11:14.582 fused_ordering(429) 00:11:14.582 fused_ordering(430) 00:11:14.582 fused_ordering(431) 00:11:14.582 fused_ordering(432) 00:11:14.582 fused_ordering(433) 00:11:14.582 fused_ordering(434) 00:11:14.582 fused_ordering(435) 00:11:14.582 fused_ordering(436) 00:11:14.582 fused_ordering(437) 00:11:14.582 fused_ordering(438) 00:11:14.582 fused_ordering(439) 00:11:14.582 fused_ordering(440) 00:11:14.582 fused_ordering(441) 00:11:14.582 fused_ordering(442) 00:11:14.582 fused_ordering(443) 00:11:14.582 fused_ordering(444) 00:11:14.582 fused_ordering(445) 00:11:14.582 fused_ordering(446) 00:11:14.582 fused_ordering(447) 00:11:14.582 fused_ordering(448) 00:11:14.582 fused_ordering(449) 00:11:14.582 fused_ordering(450) 00:11:14.582 fused_ordering(451) 00:11:14.582 fused_ordering(452) 00:11:14.582 fused_ordering(453) 00:11:14.582 fused_ordering(454) 00:11:14.582 fused_ordering(455) 00:11:14.582 fused_ordering(456) 00:11:14.582 fused_ordering(457) 00:11:14.582 fused_ordering(458) 00:11:14.582 fused_ordering(459) 00:11:14.582 fused_ordering(460) 00:11:14.582 fused_ordering(461) 00:11:14.582 fused_ordering(462) 00:11:14.582 fused_ordering(463) 00:11:14.582 fused_ordering(464) 00:11:14.582 fused_ordering(465) 00:11:14.582 fused_ordering(466) 00:11:14.582 fused_ordering(467) 00:11:14.582 fused_ordering(468) 00:11:14.582 fused_ordering(469) 00:11:14.582 fused_ordering(470) 00:11:14.582 fused_ordering(471) 00:11:14.582 fused_ordering(472) 00:11:14.582 fused_ordering(473) 00:11:14.582 fused_ordering(474) 00:11:14.582 fused_ordering(475) 00:11:14.582 fused_ordering(476) 00:11:14.582 fused_ordering(477) 00:11:14.582 fused_ordering(478) 00:11:14.582 fused_ordering(479) 00:11:14.582 fused_ordering(480) 00:11:14.582 fused_ordering(481) 00:11:14.582 fused_ordering(482) 00:11:14.582 fused_ordering(483) 00:11:14.582 fused_ordering(484) 00:11:14.582 fused_ordering(485) 00:11:14.582 fused_ordering(486) 00:11:14.582 fused_ordering(487) 00:11:14.582 fused_ordering(488) 00:11:14.582 fused_ordering(489) 00:11:14.582 fused_ordering(490) 00:11:14.582 fused_ordering(491) 00:11:14.582 fused_ordering(492) 00:11:14.582 fused_ordering(493) 00:11:14.582 fused_ordering(494) 00:11:14.582 fused_ordering(495) 00:11:14.582 fused_ordering(496) 00:11:14.582 fused_ordering(497) 00:11:14.582 fused_ordering(498) 00:11:14.582 fused_ordering(499) 00:11:14.582 fused_ordering(500) 00:11:14.582 fused_ordering(501) 00:11:14.582 fused_ordering(502) 00:11:14.582 fused_ordering(503) 00:11:14.582 fused_ordering(504) 00:11:14.582 fused_ordering(505) 00:11:14.582 fused_ordering(506) 00:11:14.582 fused_ordering(507) 00:11:14.582 fused_ordering(508) 00:11:14.582 fused_ordering(509) 00:11:14.582 fused_ordering(510) 00:11:14.582 fused_ordering(511) 00:11:14.582 fused_ordering(512) 00:11:14.582 fused_ordering(513) 00:11:14.582 fused_ordering(514) 00:11:14.582 fused_ordering(515) 00:11:14.582 fused_ordering(516) 00:11:14.582 fused_ordering(517) 00:11:14.582 fused_ordering(518) 00:11:14.582 fused_ordering(519) 00:11:14.582 fused_ordering(520) 00:11:14.582 fused_ordering(521) 00:11:14.582 fused_ordering(522) 00:11:14.582 fused_ordering(523) 00:11:14.582 fused_ordering(524) 00:11:14.582 fused_ordering(525) 00:11:14.582 fused_ordering(526) 
00:11:14.582 fused_ordering(527) through 00:11:14.842 fused_ordering(956): sequential fused_ordering trace entries, one per queued command, emitted back-to-back over this interval
00:11:14.842 fused_ordering(957) 00:11:14.842 fused_ordering(958) 00:11:14.842 fused_ordering(959) 00:11:14.842 fused_ordering(960) 00:11:14.842 fused_ordering(961) 00:11:14.842 fused_ordering(962) 00:11:14.842 fused_ordering(963) 00:11:14.842 fused_ordering(964) 00:11:14.842 fused_ordering(965) 00:11:14.842 fused_ordering(966) 00:11:14.842 fused_ordering(967) 00:11:14.842 fused_ordering(968) 00:11:14.842 fused_ordering(969) 00:11:14.842 fused_ordering(970) 00:11:14.842 fused_ordering(971) 00:11:14.842 fused_ordering(972) 00:11:14.842 fused_ordering(973) 00:11:14.842 fused_ordering(974) 00:11:14.842 fused_ordering(975) 00:11:14.842 fused_ordering(976) 00:11:14.842 fused_ordering(977) 00:11:14.842 fused_ordering(978) 00:11:14.842 fused_ordering(979) 00:11:14.842 fused_ordering(980) 00:11:14.842 fused_ordering(981) 00:11:14.842 fused_ordering(982) 00:11:14.842 fused_ordering(983) 00:11:14.842 fused_ordering(984) 00:11:14.842 fused_ordering(985) 00:11:14.842 fused_ordering(986) 00:11:14.842 fused_ordering(987) 00:11:14.842 fused_ordering(988) 00:11:14.842 fused_ordering(989) 00:11:14.842 fused_ordering(990) 00:11:14.842 fused_ordering(991) 00:11:14.842 fused_ordering(992) 00:11:14.842 fused_ordering(993) 00:11:14.842 fused_ordering(994) 00:11:14.842 fused_ordering(995) 00:11:14.842 fused_ordering(996) 00:11:14.842 fused_ordering(997) 00:11:14.842 fused_ordering(998) 00:11:14.842 fused_ordering(999) 00:11:14.842 fused_ordering(1000) 00:11:14.842 fused_ordering(1001) 00:11:14.842 fused_ordering(1002) 00:11:14.842 fused_ordering(1003) 00:11:14.842 fused_ordering(1004) 00:11:14.842 fused_ordering(1005) 00:11:14.842 fused_ordering(1006) 00:11:14.842 fused_ordering(1007) 00:11:14.842 fused_ordering(1008) 00:11:14.842 fused_ordering(1009) 00:11:14.842 fused_ordering(1010) 00:11:14.842 fused_ordering(1011) 00:11:14.842 fused_ordering(1012) 00:11:14.842 fused_ordering(1013) 00:11:14.842 fused_ordering(1014) 00:11:14.842 fused_ordering(1015) 00:11:14.842 fused_ordering(1016) 00:11:14.842 fused_ordering(1017) 00:11:14.842 fused_ordering(1018) 00:11:14.842 fused_ordering(1019) 00:11:14.842 fused_ordering(1020) 00:11:14.842 fused_ordering(1021) 00:11:14.842 fused_ordering(1022) 00:11:14.842 fused_ordering(1023) 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:14.842 rmmod nvme_rdma 00:11:14.842 rmmod nvme_fabrics 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2761417 ']' 
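Once the fused_ordering trace reaches entry 1023 the test traps out and nvmftestfini/nvmfcleanup tear the transport back down: errexit is disabled, nvme-rdma is unloaded with up to 20 attempts, then nvme-fabrics, then strict mode is restored. A minimal standalone sketch of that retry pattern, assuming the loop shape (the bounded retry and the module names are taken from the trace; the sleep between attempts is illustrative):

```bash
# Sketch of the nvmfcleanup pattern traced above: tolerate failures while
# nvme-rdma is still busy, retry a bounded number of times, then restore
# errexit. Loop shape is illustrative; module names are from the trace.
nvmfcleanup_sketch() {
    sync
    set +e                               # the module may still be in use
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    return 0
}
```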
00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2761417 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2761417 ']' 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2761417 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.842 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2761417 00:11:15.101 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:15.101 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:15.101 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2761417' 00:11:15.101 killing process with pid 2761417 00:11:15.101 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2761417 00:11:15.101 14:45:48 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2761417 00:11:15.101 14:45:49 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:15.101 14:45:49 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:15.101 00:11:15.101 real 0m7.547s 00:11:15.101 user 0m4.298s 00:11:15.101 sys 0m4.455s 00:11:15.101 14:45:49 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.101 14:45:49 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:15.101 ************************************ 00:11:15.101 END TEST nvmf_fused_ordering 00:11:15.101 ************************************ 00:11:15.361 14:45:49 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:15.361 14:45:49 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:15.361 14:45:49 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:15.361 14:45:49 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.361 14:45:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:15.361 ************************************ 00:11:15.361 START TEST nvmf_delete_subsystem 00:11:15.361 ************************************ 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:15.361 * Looking for test storage... 
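Before the delete_subsystem test starts, nvmftestfini also has to stop the nvmf target process; the killprocess 2761417 trace above shows the pattern: confirm the pid is still alive with kill -0, read its command name so a sudo wrapper is never signalled, then kill and wait. A hedged reconstruction of that helper (the real one lives in autotest_common.sh; only what the trace implies is shown here):

```bash
# Reconstruction of the killprocess flow traced above (illustrative).
killprocess_sketch() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 0            # already gone
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 in the trace
    [[ $process_name != sudo ]] || return 1           # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # works because the target is our child
}
```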
00:11:15.361 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:15.361 14:45:49 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:20.630 14:45:54 
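gather_supported_nvmf_pci_devs assembles per-family NIC device-ID lists (e810, x722, mlx) under vendors 0x8086 and 0x15b3 and, because SPDK_TEST_NVMF_NICS=mlx5 for this job, the following lines narrow pci_devs down to the Mellanox set. A rough, self-contained equivalent of that lookup (the real common.sh consults its own pci_bus_cache; lspci is only a substitution to make the sketch runnable):

```bash
# Illustrative stand-in for gather_supported_nvmf_pci_devs: collect Mellanox
# functions by vendor:device ID. Device IDs are the ones listed in the trace.
mellanox=0x15b3
mlx_ids=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
pci_devs=()
for id in "${mlx_ids[@]}"; do
    while read -r addr _; do
        pci_devs+=("$addr")                           # e.g. 0000:da:00.0
    done < <(lspci -Dn -d "${mellanox#0x}:${id#0x}")
done
(( ${#pci_devs[@]} )) && printf 'Found %s\n' "${pci_devs[@]}"
```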
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:20.630 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:20.630 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:20.631 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:20.631 Found net devices under 0000:da:00.0: mlx_0_0 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.631 14:45:54 
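Each selected PCI function is then resolved to its netdev by globbing sysfs, which is how 0000:da:00.0 and 0000:da:00.1 map to mlx_0_0 and mlx_0_1 above. A minimal sketch of that mapping, using the address reported in the trace:

```bash
# Map a PCI function to its network interface name(s), as done above.
pci=0000:da:00.0                                      # address reported in the trace
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")               # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```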
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:20.631 Found net devices under 0000:da:00.1: mlx_0_1 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:20.631 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.890 14:45:54 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:20.890 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:20.890 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:20.890 altname enp218s0f0np0 00:11:20.890 altname ens818f0np0 00:11:20.890 inet 192.168.100.8/24 scope global mlx_0_0 00:11:20.890 valid_lft forever preferred_lft forever 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:20.890 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:20.891 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:20.891 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:20.891 altname enp218s0f1np1 00:11:20.891 altname ens818f1np1 00:11:20.891 inet 192.168.100.9/24 scope global mlx_0_1 00:11:20.891 valid_lft forever preferred_lft forever 00:11:20.891 14:45:54 
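allocate_nic_ips walks the RDMA-capable interfaces and extracts each one's IPv4 address with the ip/awk/cut pipeline traced above, giving 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1. The same extraction as a tiny helper (pipeline copied from the trace; the interface name is a parameter):

```bash
# Extract the primary IPv4 address of an interface, exactly as the trace does.
get_ip_address_sketch() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address_sketch mlx_0_0    # -> 192.168.100.8 on this node
get_ip_address_sketch mlx_0_1    # -> 192.168.100.9 on this node
```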
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:20.891 192.168.100.9' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:20.891 192.168.100.9' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:20.891 192.168.100.9' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2764942 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2764942 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2764942 ']' 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.891 14:45:54 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:20.891 [2024-07-15 14:45:54.723143] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
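With both addresses collected, RDMA_IP_LIST holds one IP per line; head/tail pick NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, the transport options become '-t rdma --num-shared-buffers 1024', nvme-rdma is loaded, and nvmfappstart launches the target whose startup banner appears above. A compact sketch of that selection and setup, with the values as traced:

```bash
# Pick first/second RDMA target IPs from the newline-separated list,
# mirroring the head/tail pipeline in the trace, then prepare the transport.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'          # as gathered above
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma                                    # nvmf/common.sh@474 in the trace
echo "targets: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"
```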
00:11:20.891 [2024-07-15 14:45:54.723187] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.891 EAL: No free 2048 kB hugepages reported on node 1 00:11:20.891 [2024-07-15 14:45:54.779513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:21.150 [2024-07-15 14:45:54.860970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.150 [2024-07-15 14:45:54.861006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.150 [2024-07-15 14:45:54.861012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.150 [2024-07-15 14:45:54.861018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.150 [2024-07-15 14:45:54.861023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.150 [2024-07-15 14:45:54.861057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.150 [2024-07-15 14:45:54.861061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.717 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.717 [2024-07-15 14:45:55.567852] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x70f3c0/0x7138b0) succeed. 00:11:21.717 [2024-07-15 14:45:55.576627] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x710870/0x754f40) succeed. 
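nvmfappstart runs build/bin/nvmf_tgt with core mask 0x3, waits for its RPC socket, and the test then creates the RDMA transport; the two "Create IB device ... succeed" notices are that transport binding to both mlx5 ports. Roughly the same sequence issued by hand (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the readiness poll below is a simplified stand-in for waitforlisten, and the paths and flags are the ones shown above):

```bash
# Start the NVMe-oF target and create the RDMA transport, as the test does.
SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # workspace path from the trace
"$SPDK_BIN/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# wait until the RPC socket answers (simplified waitforlisten)
until "$SPDK_BIN/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done
"$SPDK_BIN/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
```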
00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.977 [2024-07-15 14:45:55.667582] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.977 NULL1 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.977 Delay0 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2764990 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:21.977 14:45:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:21.977 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.977 [2024-07-15 14:45:55.764442] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
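The test then builds its victim: subsystem cnode1 capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, a null bdev wrapped in a bdev_delay with very high (1,000,000 µs) latencies so I/O stays outstanding, the delay bdev attached as a namespace, and a background spdk_nvme_perf run keeping 128-deep random read/write queues against it. The same steps spelled out as plain rpc.py/perf invocations (every argument is copied from the trace; rpc_cmd is assumed to forward them to scripts/rpc.py unchanged):

```bash
SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK_BIN/scripts/rpc.py"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_null_create NULL1 1000 512                  # name, size, block size from the trace
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# keep deep queues of slow I/O in flight while the subsystem is deleted
"$SPDK_BIN/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
```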
00:11:23.882 14:45:57 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.882 14:45:57 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.882 14:45:57 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:25.258 NVMe io qpair process completion error 00:11:25.258 NVMe io qpair process completion error 00:11:25.258 NVMe io qpair process completion error 00:11:25.258 NVMe io qpair process completion error 00:11:25.258 NVMe io qpair process completion error 00:11:25.258 NVMe io qpair process completion error 00:11:25.258 14:45:58 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.258 14:45:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:25.258 14:45:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2764990 00:11:25.258 14:45:58 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:25.516 14:45:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:25.516 14:45:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2764990 00:11:25.516 14:45:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Write completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting I/O failed: -6 00:11:26.083 Read completed with error (sct=0, sc=8) 00:11:26.083 starting 
I/O failed: -6 00:11:26.083-00:11:26.085 "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" completions, interleaved with further "starting I/O failed: -6" submission failures, repeat for the remaining outstanding commands on the deleted subsystem's qpairs
Write completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Write completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Write completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Write completed with error (sct=0, sc=8) 00:11:26.085 Write completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Read completed with error (sct=0, sc=8) 00:11:26.085 Write completed with error (sct=0, sc=8) 00:11:26.085 Initializing NVMe Controllers 00:11:26.085 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:26.085 Controller IO queue size 128, less than required. 00:11:26.085 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:26.085 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:26.085 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:26.085 Initialization complete. Launching workers. 00:11:26.085 ======================================================== 00:11:26.085 Latency(us) 00:11:26.085 Device Information : IOPS MiB/s Average min max 00:11:26.085 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.39 0.04 1595143.37 1000097.65 2980196.35 00:11:26.085 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.39 0.04 1596547.32 1001549.96 2981102.84 00:11:26.085 ======================================================== 00:11:26.085 Total : 160.79 0.08 1595845.34 1000097.65 2981102.84 00:11:26.085 00:11:26.085 14:45:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:26.085 14:45:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2764990 00:11:26.085 14:45:59 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:26.085 [2024-07-15 14:45:59.862396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:26.085 [2024-07-15 14:45:59.862431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
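The trace entries above (kill -0 2764990, sleep 0.5, (( delay++ > 30 ))) come from the wait loop in delete_subsystem.sh: the test deletes the subsystem while spdk_nvme_perf is still running, then polls until the perf process exits. A minimal sketch of that polling pattern, reconstructed from the trace rather than copied from the script (the helper name wait_for_perf_exit and the stderr redirect are illustrative):

# Sketch of the bounded poll loop suggested by the delete_subsystem.sh trace.
wait_for_perf_exit() {
    local perf_pid=$1
    local delay=0
    # kill -0 sends no signal; it only checks whether the PID still exists.
    while kill -0 "$perf_pid" 2> /dev/null; do
        # Give up after roughly 15 s (30 iterations x 0.5 s) so a hung perf
        # process cannot stall the test forever.
        (( delay++ > 30 )) && return 1
        sleep 0.5
    done
    return 0
}

Pairing kill -0 with a bounded delay counter is what lets the script tolerate the expected "No such process" once perf exits, while still failing fast if it never does.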
00:11:26.085 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2764990 00:11:26.651 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2764990) - No such process 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2764990 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2764990 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2764990 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.651 [2024-07-15 14:46:00.381785] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2765887 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:26.651 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.651 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.651 [2024-07-15 14:46:00.459397] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:27.217 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:27.217 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:27.217 14:46:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:27.779 14:46:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:27.779 14:46:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:27.779 14:46:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:28.036 14:46:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:28.036 14:46:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:28.036 14:46:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:28.597 14:46:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:28.597 14:46:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:28.597 14:46:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:29.159 14:46:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:29.159 14:46:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:29.159 14:46:02 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:29.721 14:46:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:29.721 14:46:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:29.721 14:46:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.284 14:46:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.284 14:46:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:30.284 14:46:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.541 14:46:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.541 14:46:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:30.541 14:46:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.106 14:46:04 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.106 14:46:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:31.106 14:46:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.670 14:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.670 14:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:31.670 14:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.234 14:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.234 14:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:32.234 14:46:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.799 14:46:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.799 14:46:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:32.799 14:46:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.056 14:46:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.056 14:46:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:33.056 14:46:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.620 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:33.621 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:33.621 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:33.878 Initializing NVMe Controllers 00:11:33.878 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:33.878 Controller IO queue size 128, less than required. 00:11:33.878 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:33.878 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:33.878 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:33.878 Initialization complete. Launching workers. 
00:11:33.878 ======================================================== 00:11:33.878 Latency(us) 00:11:33.878 Device Information : IOPS MiB/s Average min max 00:11:33.878 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001268.87 1000051.55 1003934.45 00:11:33.878 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002538.81 1000160.55 1005901.01 00:11:33.878 ======================================================== 00:11:33.878 Total : 256.00 0.12 1001903.84 1000051.55 1005901.01 00:11:33.878 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2765887 00:11:34.136 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2765887) - No such process 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2765887 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:34.136 rmmod nvme_rdma 00:11:34.136 rmmod nvme_fabrics 00:11:34.136 14:46:07 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2764942 ']' 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2764942 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2764942 ']' 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2764942 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2764942 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2764942' 00:11:34.136 killing process with pid 2764942 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 
2764942 00:11:34.136 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2764942 00:11:34.393 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:34.393 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:34.393 00:11:34.393 real 0m19.208s 00:11:34.393 user 0m49.636s 00:11:34.393 sys 0m5.207s 00:11:34.393 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:34.393 14:46:08 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.393 ************************************ 00:11:34.393 END TEST nvmf_delete_subsystem 00:11:34.393 ************************************ 00:11:34.652 14:46:08 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:34.652 14:46:08 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:34.652 14:46:08 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:34.652 14:46:08 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.652 14:46:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:34.652 ************************************ 00:11:34.652 START TEST nvmf_ns_masking 00:11:34.652 ************************************ 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:34.652 * Looking for test storage... 00:11:34.652 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
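The nvmf/common.sh trace at the end of the block above defines the environment the ns_masking test runs with. Rewritten as plain shell for readability, the defaults amount to roughly the following (a sketch assembled from the values shown in the trace; the derivation of NVME_HOSTID from the generated host NQN is inferred from the two printed values, not taken from the script):

# Defaults sourced from nvmf/common.sh, as seen in the trace (sketch).
NVMF_PORT=4420
NVMF_SECOND_PORT=4421
NVMF_THIRD_PORT=4422
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8            # first target address becomes 192.168.100.8
NVMF_SERIAL=SPDKISFASTANDAWESOME
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
NET_TYPE=phy                    # selects the physical-NIC branch ([[ phy != virt ]]) later in the trace

# A fresh host NQN is generated per run; the host ID appears to be its UUID suffix
# (inferred from the printed values, e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-...).
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*:}
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
NVME_CONNECT="nvme connect"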
00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=91ad46ce-0379-40b0-91df-73fa0e1fcf32 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ea6a3e0e-9d48-4de6-b026-eb3e1482d2c9 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1e5d0244-c2c4-4f93-a60a-1de0f3e222b8 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.652 14:46:08 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:39.920 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:39.920 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:39.920 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:39.921 Found net devices under 0000:da:00.0: mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:39.921 Found net devices under 0000:da:00.1: mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- 
# '[' Linux '!=' Linux ']' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:39.921 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:39.921 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:39.921 altname enp218s0f0np0 00:11:39.921 altname ens818f0np0 00:11:39.921 inet 192.168.100.8/24 scope global mlx_0_0 00:11:39.921 valid_lft forever preferred_lft forever 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:39.921 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:39.921 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:39.921 altname enp218s0f1np1 00:11:39.921 altname ens818f1np1 00:11:39.921 inet 192.168.100.9/24 scope global mlx_0_1 00:11:39.921 valid_lft forever preferred_lft forever 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:39.921 192.168.100.9' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:39.921 192.168.100.9' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:39.921 192.168.100.9' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:39.921 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2770137 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:39.922 14:46:13 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2770137 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2770137 ']' 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.922 14:46:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:39.922 [2024-07-15 14:46:13.826396] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:11:39.922 [2024-07-15 14:46:13.826451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.180 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.180 [2024-07-15 14:46:13.884531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.180 [2024-07-15 14:46:13.960420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.180 [2024-07-15 14:46:13.960459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.180 [2024-07-15 14:46:13.960465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.180 [2024-07-15 14:46:13.960471] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.180 [2024-07-15 14:46:13.960475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.180 [2024-07-15 14:46:13.960517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.744 14:46:14 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.744 14:46:14 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:40.744 14:46:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:40.744 14:46:14 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.744 14:46:14 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:40.744 14:46:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.744 14:46:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:41.001 [2024-07-15 14:46:14.831633] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18ec910/0x18f0e00) succeed. 00:11:41.001 [2024-07-15 14:46:14.840248] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18ede10/0x1932490) succeed. 
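With the RDMA transport created and both mlx5 IB devices registered, the next trace entries provision the target through rpc.py. Condensed into plain shell, the sequence is roughly the following (a sketch built from the rpc.py invocations visible in the trace; only the ordering and the comments are editorial):

# Target-side setup for the ns_masking test, as seen in the trace (sketch).
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport with shared buffers, then two 64 MiB malloc bdevs with 512 B blocks.
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc1
$rpc_py bdev_malloc_create 64 512 -b Malloc2

# Subsystem cnode1 with Malloc1 attached as namespace 1, listening on RDMA 192.168.100.8:4420.
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Later in the trace Malloc2 is added as namespace 2 and Malloc1 is re-added with --no-auto-visible, which is what the ns_is_visible checks (nvme list-ns / nvme id-ns piped through jq) exercise.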
00:11:41.001 14:46:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:41.001 14:46:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:41.001 14:46:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:41.259 Malloc1 00:11:41.259 14:46:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:41.517 Malloc2 00:11:41.517 14:46:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.776 14:46:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:41.776 14:46:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.034 [2024-07-15 14:46:15.783238] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.034 14:46:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:42.034 14:46:15 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1e5d0244-c2c4-4f93-a60a-1de0f3e222b8 -a 192.168.100.8 -s 4420 -i 4 00:11:42.292 14:46:16 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.292 14:46:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:42.292 14:46:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.292 14:46:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:42.292 14:46:16 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:44.189 14:46:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:44.189 14:46:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:44.189 14:46:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:44.447 [ 0]:0x1 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca981fe340af48b8b167a85ff837fe53 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca981fe340af48b8b167a85ff837fe53 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.447 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:44.705 [ 0]:0x1 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca981fe340af48b8b167a85ff837fe53 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca981fe340af48b8b167a85ff837fe53 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:44.705 [ 1]:0x2 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daa3e70f779d40aa8b327659d07b655c 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daa3e70f779d40aa8b327659d07b655c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:44.705 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.961 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.218 14:46:18 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:45.218 14:46:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:45.218 14:46:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1e5d0244-c2c4-4f93-a60a-1de0f3e222b8 -a 192.168.100.8 -s 4420 -i 4 00:11:45.782 14:46:19 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:45.782 14:46:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:45.782 14:46:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.782 14:46:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:45.782 14:46:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:45.782 14:46:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.678 14:46:21 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:47.678 [ 0]:0x2 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daa3e70f779d40aa8b327659d07b655c 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daa3e70f779d40aa8b327659d07b655c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.678 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:47.935 [ 0]:0x1 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca981fe340af48b8b167a85ff837fe53 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca981fe340af48b8b167a85ff837fe53 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:47.935 [ 1]:0x2 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.935 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.192 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daa3e70f779d40aa8b327659d07b655c 00:11:48.192 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daa3e70f779d40aa8b327659d07b655c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.192 14:46:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:48.192 14:46:22 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.192 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.193 [ 0]:0x2 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.193 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.449 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daa3e70f779d40aa8b327659d07b655c 00:11:48.449 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daa3e70f779d40aa8b327659d07b655c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.449 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:48.449 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.705 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:48.962 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:48.962 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1e5d0244-c2c4-4f93-a60a-1de0f3e222b8 -a 192.168.100.8 -s 4420 -i 4 00:11:49.218 14:46:22 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:49.218 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:49.218 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.218 14:46:22 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:49.218 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:49.218 14:46:22 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.119 [ 0]:0x1 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.119 14:46:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.119 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca981fe340af48b8b167a85ff837fe53 00:11:51.119 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca981fe340af48b8b167a85ff837fe53 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.119 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:51.119 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.119 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.119 [ 1]:0x2 00:11:51.119 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.119 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daa3e70f779d40aa8b327659d07b655c 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daa3e70f779d40aa8b327659d07b655c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- 
common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.377 [ 0]:0x2 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.377 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.635 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daa3e70f779d40aa8b327659d07b655c 00:11:51.635 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daa3e70f779d40aa8b327659d07b655c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.635 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:51.635 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:51.636 14:46:25 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:51.636 [2024-07-15 14:46:25.485756] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:51.636 request: 00:11:51.636 { 00:11:51.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:51.636 "nsid": 2, 00:11:51.636 "host": "nqn.2016-06.io.spdk:host1", 00:11:51.636 "method": "nvmf_ns_remove_host", 00:11:51.636 "req_id": 1 00:11:51.636 } 00:11:51.636 Got JSON-RPC error response 00:11:51.636 response: 00:11:51.636 { 00:11:51.636 "code": -32602, 00:11:51.636 "message": "Invalid parameters" 00:11:51.636 } 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.636 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:51.893 14:46:25 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.893 [ 0]:0x2 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=daa3e70f779d40aa8b327659d07b655c 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ daa3e70f779d40aa8b327659d07b655c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:51.893 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2772356 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2772356 /var/tmp/host.sock 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2772356 ']' 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:52.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.151 14:46:25 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.151 [2024-07-15 14:46:25.962228] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:11:52.151 [2024-07-15 14:46:25.962273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2772356 ] 00:11:52.151 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.151 [2024-07-15 14:46:26.015741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.408 [2024-07-15 14:46:26.089237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.973 14:46:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:52.973 14:46:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:52.973 14:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.229 14:46:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:53.230 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 91ad46ce-0379-40b0-91df-73fa0e1fcf32 00:11:53.230 14:46:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:53.230 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 91AD46CE037940B091DF73FA0E1FCF32 -i 00:11:53.488 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ea6a3e0e-9d48-4de6-b026-eb3e1482d2c9 00:11:53.488 14:46:27 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:53.488 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EA6A3E0E9D484DE6B026EB3E1482D2C9 -i 00:11:53.765 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:53.765 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:54.064 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:54.064 14:46:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:54.337 nvme0n1 00:11:54.337 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:54.337 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host2 -b nvme1 00:11:54.337 nvme1n2 00:11:54.337 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:54.337 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:54.337 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:54.337 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:54.337 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:54.615 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:54.615 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:54.615 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:54.615 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 91ad46ce-0379-40b0-91df-73fa0e1fcf32 == \9\1\a\d\4\6\c\e\-\0\3\7\9\-\4\0\b\0\-\9\1\d\f\-\7\3\f\a\0\e\1\f\c\f\3\2 ]] 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ea6a3e0e-9d48-4de6-b026-eb3e1482d2c9 == \e\a\6\a\3\e\0\e\-\9\d\4\8\-\4\d\e\6\-\b\0\2\6\-\e\b\3\e\1\4\8\2\d\2\c\9 ]] 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2772356 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2772356 ']' 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2772356 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:54.885 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2772356 00:11:55.143 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:55.143 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:55.143 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2772356' 00:11:55.143 killing process with pid 2772356 00:11:55.143 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2772356 00:11:55.143 14:46:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2772356 00:11:55.401 14:46:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@142 -- # 
nvmftestfini 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:55.659 rmmod nvme_rdma 00:11:55.659 rmmod nvme_fabrics 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2770137 ']' 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2770137 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2770137 ']' 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2770137 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2770137 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2770137' 00:11:55.659 killing process with pid 2770137 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2770137 00:11:55.659 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2770137 00:11:55.917 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.917 14:46:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:55.917 00:11:55.917 real 0m21.336s 00:11:55.917 user 0m25.108s 00:11:55.917 sys 0m5.818s 00:11:55.917 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.917 14:46:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.917 ************************************ 00:11:55.917 END TEST nvmf_ns_masking 00:11:55.917 ************************************ 00:11:55.917 14:46:29 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:55.917 14:46:29 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:55.917 14:46:29 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:55.917 14:46:29 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.917 14:46:29 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.917 14:46:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:55.917 ************************************ 00:11:55.917 START TEST nvmf_nvme_cli 
00:11:55.917 ************************************ 00:11:55.917 14:46:29 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:55.917 * Looking for test storage... 00:11:55.917 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:55.917 14:46:29 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.917 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.175 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:56.176 14:46:29 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.437 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:01.438 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:01.438 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:01.438 Found net devices under 0000:da:00.0: mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:01.438 14:46:35 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:01.438 Found net devices under 0000:da:00.1: mlx_0_1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
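The interface walk that follows reduces to one ip/awk/cut pipeline per mlx port; the helper below is a sketch of what the traced get_ip_address calls do (nvmf/common.sh@112-113), assuming the mlx_0_0 and mlx_0_1 interface names reported above.

# Sketch of the address derivation traced at nvmf/common.sh@112-113.
get_ip_address() {
  local interface=$1
  # "6: mlx_0_0    inet 192.168.100.8/24 scope global mlx_0_0 ..." -> 192.168.100.8
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run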
00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:01.438 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.438 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:01.438 altname enp218s0f0np0 00:12:01.438 altname ens818f0np0 00:12:01.438 inet 192.168.100.8/24 scope global mlx_0_0 00:12:01.438 valid_lft forever preferred_lft forever 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:01.438 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.438 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:01.438 altname enp218s0f1np1 00:12:01.438 altname ens818f1np1 00:12:01.438 inet 192.168.100.9/24 scope global mlx_0_1 00:12:01.438 valid_lft forever preferred_lft forever 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:01.438 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:01.439 192.168.100.9' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:01.439 192.168.100.9' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:01.439 192.168.100.9' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2776136 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2776136 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2776136 ']' 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:01.439 14:46:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.439 [2024-07-15 14:46:35.287250] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:12:01.439 [2024-07-15 14:46:35.287293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.439 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.439 [2024-07-15 14:46:35.342752] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.696 [2024-07-15 14:46:35.429095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.696 [2024-07-15 14:46:35.429131] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.696 [2024-07-15 14:46:35.429138] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.696 [2024-07-15 14:46:35.429144] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.696 [2024-07-15 14:46:35.429149] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
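The trace above shows how the two target IPs are peeled off the newline-separated RDMA_IP_LIST: `head -n 1` yields the first address and `tail -n +2 | head -n 1` the second, after which the transport options are finalized and nvme-rdma is loaded. A sketch of just that selection step, with the list hard-coded to the values seen in this run:

```bash
#!/usr/bin/env bash
# Stand-in for the list nvmf/common.sh assembles from the RDMA interfaces.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

[ -z "$NVMF_FIRST_TARGET_IP" ] && { echo "no RDMA-capable IP found" >&2; exit 1; }
echo "first target:  $NVMF_FIRST_TARGET_IP"
echo "second target: $NVMF_SECOND_TARGET_IP"
```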
00:12:01.696 [2024-07-15 14:46:35.429214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.696 [2024-07-15 14:46:35.429310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.696 [2024-07-15 14:46:35.429395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.696 [2024-07-15 14:46:35.429396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.259 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.259 [2024-07-15 14:46:36.150606] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x84ecc0/0x8531b0) succeed. 00:12:02.259 [2024-07-15 14:46:36.159713] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x850300/0x894840) succeed. 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.516 Malloc0 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.516 Malloc1 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.516 [2024-07-15 14:46:36.356619] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.516 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:12:02.772 00:12:02.772 Discovery Log Number of Records 2, Generation counter 2 00:12:02.772 =====Discovery Log Entry 0====== 00:12:02.772 trtype: rdma 00:12:02.772 adrfam: ipv4 00:12:02.772 subtype: current discovery subsystem 00:12:02.772 treq: not required 00:12:02.772 portid: 0 00:12:02.772 trsvcid: 4420 00:12:02.772 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:02.772 traddr: 192.168.100.8 00:12:02.772 eflags: explicit discovery connections, duplicate discovery information 00:12:02.772 rdma_prtype: not specified 00:12:02.772 rdma_qptype: connected 00:12:02.772 rdma_cms: rdma-cm 00:12:02.772 rdma_pkey: 0x0000 00:12:02.772 =====Discovery Log Entry 1====== 00:12:02.772 trtype: rdma 00:12:02.772 adrfam: ipv4 00:12:02.772 subtype: nvme subsystem 00:12:02.772 treq: not required 00:12:02.772 portid: 0 00:12:02.772 trsvcid: 4420 00:12:02.772 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:02.772 traddr: 192.168.100.8 00:12:02.772 eflags: none 00:12:02.772 rdma_prtype: not specified 00:12:02.772 rdma_qptype: connected 00:12:02.772 rdma_cms: rdma-cm 00:12:02.772 rdma_pkey: 0x0000 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:02.772 14:46:36 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:02.772 14:46:36 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:03.702 14:46:37 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:03.702 14:46:37 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:03.702 14:46:37 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.702 14:46:37 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:03.702 14:46:37 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:03.702 14:46:37 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:05.599 /dev/nvme0n1 ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:05.599 14:46:39 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:06.970 rmmod nvme_rdma 00:12:06.970 rmmod nvme_fabrics 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2776136 ']' 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2776136 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2776136 ']' 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2776136 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2776136 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2776136' 00:12:06.970 killing process with pid 2776136 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2776136 00:12:06.970 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2776136 00:12:07.229 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.229 14:46:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:07.229 00:12:07.229 real 0m11.181s 00:12:07.229 user 0m23.310s 00:12:07.229 sys 0m4.633s 00:12:07.229 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.229 14:46:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:07.229 ************************************ 00:12:07.229 END TEST nvmf_nvme_cli 00:12:07.229 ************************************ 00:12:07.229 14:46:40 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:07.229 14:46:40 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:12:07.229 14:46:40 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:07.229 14:46:40 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.229 14:46:40 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.229 14:46:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:07.229 ************************************ 00:12:07.229 START TEST nvmf_host_management 00:12:07.229 ************************************ 00:12:07.229 14:46:40 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:07.229 * Looking for test storage... 
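The nvme_cli run that just finished follows a simple host-side lifecycle: after the discovery step, it connects with `nvme connect -i 15`, polls `lsblk -l -o NAME,SERIAL` until the expected namespaces appear, runs its checks, then disconnects and waits for the serial to vanish before tearing the target down. A condensed sketch of that lifecycle using the subsystem and serial from the run above (the polling loops are simplified relative to the helpers in autotest_common.sh):

```bash
#!/usr/bin/env bash
nqn=nqn.2016-06.io.spdk:cnode1
serial=SPDKISFASTANDAWESOME
target=192.168.100.8

# Connect; -i 15 mirrors the retry count the test uses for mlx5 ports.
nvme connect -i 15 -t rdma -n "$nqn" -a "$target" -s 4420

# Wait until both namespaces backed by this serial show up as block devices.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c "$serial")" -ge 2 ]; do sleep 2; done

# ... exercise the devices here ...

# Disconnect and wait for the serial to disappear again.
nvme disconnect -n "$nqn"
while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do sleep 1; done
```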
00:12:07.229 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.229 14:46:41 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.514 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:12.515 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:12.515 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:12.515 Found net devices under 0000:da:00.0: mlx_0_0 00:12:12.515 
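The device scan above resolves each mlx5 PCI function to its kernel netdev by globbing /sys/bus/pci/devices/&lt;bdf&gt;/net/ and then stripping the directory prefix with `${pci_net_devs[@]##*/}`. A small sketch of that sysfs lookup; the BDF is the one from this run, but any RDMA NIC address would do:

```bash
#!/usr/bin/env bash
pci=0000:da:00.0   # PCI address of one mlx5 port, taken from the log above

# Glob the netdev entries sysfs exposes for this PCI function.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)

# Without nullglob an unmatched glob stays literal, so verify the entry exists.
if (( ${#pci_net_devs[@]} == 0 )) || [[ ! -e ${pci_net_devs[0]} ]]; then
  echo "no net devices under $pci" >&2
  exit 1
fi

# Keep only the interface names (mlx_0_0, ...), dropping the sysfs path.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```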
14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:12.515 Found net devices under 0000:da:00.1: mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:12.515 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:12.515 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:12.515 altname enp218s0f0np0 00:12:12.515 altname ens818f0np0 00:12:12.515 inet 192.168.100.8/24 scope global mlx_0_0 00:12:12.515 valid_lft forever preferred_lft forever 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:12.515 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:12.515 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:12.515 altname enp218s0f1np1 00:12:12.515 altname ens818f1np1 00:12:12.515 inet 192.168.100.9/24 scope global mlx_0_1 00:12:12.515 valid_lft forever preferred_lft forever 
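`get_ip_address` boils down to one `ip -o -4 addr show` invocation plus field extraction, and `allocate_nic_ips` only assigns an address when that comes back empty. A sketch of that helper follows; the fallback assignment out of 192.168.100.0/24 is inferred from the NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR variables sourced earlier and is not exercised in this run (both ports already carry addresses), so treat that branch as an assumption:

```bash
#!/usr/bin/env bash
NVMF_IP_PREFIX=192.168.100
count=8   # NVMF_IP_LEAST_ADDR

get_ip_address() {
  local interface=$1
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Interface names are the ones from this run; adjust for other hosts.
for nic in mlx_0_0 mlx_0_1; do
  ip_addr=$(get_ip_address "$nic")
  if [ -z "$ip_addr" ]; then
    # Assumed fallback: give the port an address from the test subnet (needs root).
    ip addr add "$NVMF_IP_PREFIX.$count/24" dev "$nic"
  fi
  count=$((count + 1))
  ip addr show "$nic"
done
```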
00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- 
# cut -d/ -f1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:12.515 192.168.100.9' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:12.515 192.168.100.9' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:12.515 192.168.100.9' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:12.515 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2779948 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2779948 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2779948 ']' 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.516 14:46:45 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:12.516 [2024-07-15 14:46:45.692880] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:12:12.516 [2024-07-15 14:46:45.692923] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.516 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.516 [2024-07-15 14:46:45.750138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.516 [2024-07-15 14:46:45.829085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.516 [2024-07-15 14:46:45.829119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.516 [2024-07-15 14:46:45.829126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.516 [2024-07-15 14:46:45.829132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.516 [2024-07-15 14:46:45.829137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.516 [2024-07-15 14:46:45.829190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.516 [2024-07-15 14:46:45.829277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.516 [2024-07-15 14:46:45.829303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:12.516 [2024-07-15 14:46:45.829305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.774 [2024-07-15 14:46:46.556077] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe3be10/0xe40300) succeed. 00:12:12.774 [2024-07-15 14:46:46.565188] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe3d400/0xe81990) succeed. 
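`rpc_cmd` forwards each call to the target's JSON-RPC socket, so the setup performed here (and in the nvme_cli run earlier) can be reproduced against a running nvmf_tgt with scripts/rpc.py. A sketch of that sequence, reusing the arguments visible in the trace rather than the test's own rpc_cmd wrapper:

```bash
#!/usr/bin/env bash
set -e
rpc=./scripts/rpc.py   # run from the SPDK repo root; defaults to /var/tmp/spdk.sock

# RDMA transport with the buffer sizing used by the test.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Two 64 MiB malloc bdevs (512-byte blocks) to act as namespaces.
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1

# Subsystem, namespaces and an RDMA listener on the first target IP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
```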
00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:12.774 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 Malloc0 00:12:13.033 [2024-07-15 14:46:46.738700] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2780214 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2780214 /var/tmp/bdevperf.sock 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2780214 ']' 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:13.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
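bdevperf is then pointed at the target over RDMA. The `--json /dev/fd/63` in the trace is a bash process substitution feeding the generated attach-controller configuration to the app, while `-q 64 -o 65536 -w verify -t 10` sets queue depth, I/O size, workload and duration. Roughly, the invocation amounts to the following (paths as in this workspace; `gen_nvmf_target_json` is a helper from test/nvmf/common.sh, assumed already sourced):

```bash
#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Assumes test/nvmf/common.sh has been sourced so gen_nvmf_target_json is defined;
# process substitution hands its output to bdevperf as if it were a JSON file.
"$SPDK/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10
```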
00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:13.033 { 00:12:13.033 "params": { 00:12:13.033 "name": "Nvme$subsystem", 00:12:13.033 "trtype": "$TEST_TRANSPORT", 00:12:13.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:13.033 "adrfam": "ipv4", 00:12:13.033 "trsvcid": "$NVMF_PORT", 00:12:13.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:13.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:13.033 "hdgst": ${hdgst:-false}, 00:12:13.033 "ddgst": ${ddgst:-false} 00:12:13.033 }, 00:12:13.033 "method": "bdev_nvme_attach_controller" 00:12:13.033 } 00:12:13.033 EOF 00:12:13.033 )") 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:13.033 14:46:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:13.033 "params": { 00:12:13.033 "name": "Nvme0", 00:12:13.033 "trtype": "rdma", 00:12:13.033 "traddr": "192.168.100.8", 00:12:13.033 "adrfam": "ipv4", 00:12:13.033 "trsvcid": "4420", 00:12:13.033 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:13.033 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:13.033 "hdgst": false, 00:12:13.033 "ddgst": false 00:12:13.033 }, 00:12:13.033 "method": "bdev_nvme_attach_controller" 00:12:13.033 }' 00:12:13.033 [2024-07-15 14:46:46.829654] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:12:13.033 [2024-07-15 14:46:46.829701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780214 ] 00:12:13.033 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.033 [2024-07-15 14:46:46.884434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.291 [2024-07-15 14:46:46.958764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.291 Running I/O for 10 seconds... 
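The bdevperf process launched above receives its bdev configuration through a process substitution: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry for Nvme0 (the printf output visible in the trace) and the result is handed over as --json /dev/fd/63. A hand-rolled equivalent, assuming the usual SPDK JSON-config wrapper ("subsystems" -> "bdev" -> "config") around the params printed above, would look roughly like this:

# inner params copied from the trace above; the outer wrapper shape is an assumption
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10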
00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1644 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1644 -ge 100 ']' 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.857 14:46:47 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:15.231 [2024-07-15 14:46:48.743518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:12:15.231 [2024-07-15 14:46:48.743555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.231 [2024-07-15 14:46:48.743572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:12:15.231 [2024-07-15 14:46:48.743579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.231 [2024-07-15 14:46:48.743589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:98560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:12:15.231 [2024-07-15 14:46:48.743596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.231 [2024-07-15 14:46:48.743604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:12:15.231 [2024-07-15 14:46:48.743611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.231 [2024-07-15 14:46:48.743620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:12:15.231 [2024-07-15 14:46:48.743626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.231 [2024-07-15 14:46:48.743634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:12:15.231 [2024-07-15 14:46:48.743640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.231 [2024-07-15 14:46:48.743649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:12:15.231 [2024-07-15 14:46:48.743655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:12:15.232 [2024-07-15 14:46:48.743671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99328 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20001385a500 len:0x10000 key:0x182500 00:12:15.232 [2024-07-15 14:46:48.743685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:99456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:12:15.232 [2024-07-15 14:46:48.743705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:99584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:12:15.232 [2024-07-15 14:46:48.743720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:99712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:12:15.232 [2024-07-15 14:46:48.743734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:99840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:12:15.232 [2024-07-15 14:46:48.743749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:99968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:12:15.232 [2024-07-15 14:46:48.743764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 
key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:100736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:100992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:12:15.232 
[2024-07-15 14:46:48.743959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:12:15.232 [2024-07-15 14:46:48.743974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.743988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.743996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:102528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 
14:46:48.744093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:103168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:103296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:103424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:12:15.232 [2024-07-15 14:46:48.744165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e11e000 len:0x10000 key:0x182400 00:12:15.232 [2024-07-15 14:46:48.744180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x182400 00:12:15.232 [2024-07-15 14:46:48.744195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x182400 00:12:15.232 [2024-07-15 14:46:48.744210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:95744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x182400 00:12:15.232 [2024-07-15 14:46:48.744224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.232 [2024-07-15 14:46:48.744232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:96384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e53e000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e51d000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4fc000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4db000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4ba000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e499000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e415000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3f4000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3d3000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3b2000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e391000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.744497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e370000 len:0x10000 key:0x182400 00:12:15.233 [2024-07-15 14:46:48.744503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:9fc08000 sqhd:52b0 p:0 m:0 dnr:0 00:12:15.233 [2024-07-15 14:46:48.746359] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:12:15.233 [2024-07-15 14:46:48.747260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:15.233 task offset: 98304 on job bdev=Nvme0n1 fails 00:12:15.233 00:12:15.233 Latency(us) 00:12:15.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.233 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:15.233 Job: Nvme0n1 ended in about 1.61 seconds with error 00:12:15.233 Verification LBA range: start 0x0 length 0x400 00:12:15.233 Nvme0n1 : 1.61 1096.47 68.53 39.67 0.00 55827.89 2246.95 1014622.11 00:12:15.233 =================================================================================================================== 00:12:15.233 Total : 1096.47 68.53 39.67 0.00 55827.89 2246.95 1014622.11 00:12:15.233 [2024-07-15 14:46:48.748852] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2780214 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:15.233 { 00:12:15.233 "params": { 00:12:15.233 "name": "Nvme$subsystem", 00:12:15.233 "trtype": "$TEST_TRANSPORT", 00:12:15.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:15.233 "adrfam": "ipv4", 00:12:15.233 "trsvcid": "$NVMF_PORT", 00:12:15.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:15.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:15.233 "hdgst": ${hdgst:-false}, 00:12:15.233 "ddgst": ${ddgst:-false} 00:12:15.233 }, 00:12:15.233 "method": "bdev_nvme_attach_controller" 00:12:15.233 } 00:12:15.233 EOF 00:12:15.233 )") 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
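This abort storm is the point of the test: waitforio polled bdev_get_iostat on the bdevperf socket until Nvme0n1 had completed at least 100 reads (1644 by then), the host NQN was removed from cnode0 so the target deleted the RDMA qpair and failed every in-flight command with ABORTED - SQ DELETION, and the host was then re-added; the second bdevperf run started here checks that the path recovers. Condensed into a standalone sketch (the real host_management.sh drives this through rpc_cmd and its own helpers, so details differ):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode0
hostnqn=nqn.2016-06.io.spdk:host0
# wait until bdevperf has pushed some reads through Nvme0n1
for i in {1..10}; do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "${reads:-0}" -ge 100 ] && break
    sleep 1
done
# dropping the host tears down its qpair and aborts outstanding I/O;
# re-adding it lets a fresh connection succeed afterwards
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"
sleep 1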
00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:15.233 14:46:48 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:15.233 "params": { 00:12:15.233 "name": "Nvme0", 00:12:15.233 "trtype": "rdma", 00:12:15.233 "traddr": "192.168.100.8", 00:12:15.233 "adrfam": "ipv4", 00:12:15.233 "trsvcid": "4420", 00:12:15.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:15.233 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:15.233 "hdgst": false, 00:12:15.233 "ddgst": false 00:12:15.233 }, 00:12:15.233 "method": "bdev_nvme_attach_controller" 00:12:15.233 }' 00:12:15.233 [2024-07-15 14:46:48.802997] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:12:15.233 [2024-07-15 14:46:48.803040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2780516 ] 00:12:15.233 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.233 [2024-07-15 14:46:48.857164] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.233 [2024-07-15 14:46:48.930997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.233 Running I/O for 1 seconds... 00:12:16.608 00:12:16.608 Latency(us) 00:12:16.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.608 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:16.608 Verification LBA range: start 0x0 length 0x400 00:12:16.608 Nvme0n1 : 1.01 3018.74 188.67 0.00 0.00 20764.42 889.42 43191.34 00:12:16.608 =================================================================================================================== 00:12:16.608 Total : 3018.74 188.67 0.00 0.00 20764.42 889.42 43191.34 00:12:16.608 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2780214 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:16.608 rmmod nvme_rdma 00:12:16.608 rmmod 
nvme_fabrics 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2779948 ']' 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2779948 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2779948 ']' 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2779948 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2779948 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2779948' 00:12:16.608 killing process with pid 2779948 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2779948 00:12:16.608 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2779948 00:12:16.867 [2024-07-15 14:46:50.689735] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:16.867 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.867 14:46:50 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:16.867 14:46:50 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:16.867 00:12:16.867 real 0m9.711s 00:12:16.867 user 0m24.056s 00:12:16.867 sys 0m4.161s 00:12:16.867 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.867 14:46:50 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:16.867 ************************************ 00:12:16.867 END TEST nvmf_host_management 00:12:16.867 ************************************ 00:12:16.867 14:46:50 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:16.867 14:46:50 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:16.867 14:46:50 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:16.867 14:46:50 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:16.867 14:46:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:16.867 ************************************ 00:12:16.867 START TEST nvmf_lvol 00:12:16.867 ************************************ 00:12:16.867 14:46:50 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:17.127 * Looking for test storage... 
00:12:17.127 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.127 14:46:50 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:22.428 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.428 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:22.428 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.429 14:46:55 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:22.429 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:22.429 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:22.429 Found net devices under 0000:da:00.0: mlx_0_0 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:22.429 Found net devices under 0000:da:00.1: mlx_0_1 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:22.429 14:46:55 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:22.429 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.429 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:22.429 altname enp218s0f0np0 00:12:22.429 altname ens818f0np0 00:12:22.429 inet 192.168.100.8/24 scope global mlx_0_0 00:12:22.429 valid_lft forever preferred_lft forever 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:22.429 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:22.429 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:22.430 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:22.430 altname enp218s0f1np1 00:12:22.430 altname ens818f1np1 00:12:22.430 inet 192.168.100.9/24 scope global mlx_0_1 00:12:22.430 valid_lft forever preferred_lft forever 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:22.430 14:46:55 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:22.430 192.168.100.9' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:22.430 192.168.100.9' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:22.430 192.168.100.9' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2783979 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2783979 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@829 -- # '[' -z 2783979 ']' 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.430 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:22.430 [2024-07-15 14:46:56.099084] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:12:22.430 [2024-07-15 14:46:56.099130] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.430 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.430 [2024-07-15 14:46:56.155037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.430 [2024-07-15 14:46:56.233043] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.430 [2024-07-15 14:46:56.233081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.430 [2024-07-15 14:46:56.233088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.430 [2024-07-15 14:46:56.233094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.430 [2024-07-15 14:46:56.233098] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.430 [2024-07-15 14:46:56.233142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.430 [2024-07-15 14:46:56.233236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.430 [2024-07-15 14:46:56.233238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.995 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:22.995 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:22.995 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.995 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:22.995 14:46:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:23.253 14:46:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.253 14:46:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:23.253 [2024-07-15 14:46:57.108587] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x72ff00/0x7343f0) succeed. 00:12:23.253 [2024-07-15 14:46:57.117460] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7314a0/0x775a80) succeed. 
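For reference, the nvmf/common.sh trace above boils down to a short shell sequence: read each mlx port's IPv4 address, start the NVMe-oF target, and create the RDMA transport. A minimal sketch, assuming an SPDK checkout in the current directory (the job uses its full Jenkins workspace path) and the 192.168.100.8/9 addresses seen in this run:

# What get_ip_address does for one RDMA-capable port
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8

# Start the target and create the RDMA transport (the script also modprobes
# ib_core/ib_uverbs/rdma_cm and friends, plus nvme-rdma for the initiator side)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192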
00:12:23.511 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.768 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:23.768 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:23.768 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:23.768 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:24.025 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:24.281 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4f1a9991-9419-4f7b-9f08-4c68f9f49806 00:12:24.281 14:46:57 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4f1a9991-9419-4f7b-9f08-4c68f9f49806 lvol 20 00:12:24.281 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=562498e4-0358-4d3d-9240-913d0a8d7dee 00:12:24.281 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:24.537 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 562498e4-0358-4d3d-9240-913d0a8d7dee 00:12:24.795 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:24.795 [2024-07-15 14:46:58.685982] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:24.795 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:25.052 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2784470 00:12:25.052 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:25.052 14:46:58 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:25.052 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.424 14:46:59 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 562498e4-0358-4d3d-9240-913d0a8d7dee MY_SNAPSHOT 00:12:26.424 14:47:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=adc1d214-4013-4352-bee8-15425a5c28d3 00:12:26.424 14:47:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 562498e4-0358-4d3d-9240-913d0a8d7dee 30 00:12:26.424 14:47:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone adc1d214-4013-4352-bee8-15425a5c28d3 MY_CLONE 00:12:26.682 14:47:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=c6b2d3a6-23f5-4490-b3f8-c39dcc6fbb9a 00:12:26.682 14:47:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c6b2d3a6-23f5-4490-b3f8-c39dcc6fbb9a 00:12:26.940 14:47:00 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2784470 00:12:36.899 Initializing NVMe Controllers 00:12:36.899 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:12:36.899 Controller IO queue size 128, less than required. 00:12:36.899 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:36.899 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:36.899 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:36.899 Initialization complete. Launching workers. 00:12:36.899 ======================================================== 00:12:36.899 Latency(us) 00:12:36.899 Device Information : IOPS MiB/s Average min max 00:12:36.899 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16762.70 65.48 7637.85 2074.88 50502.75 00:12:36.900 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16784.50 65.56 7627.76 3534.36 47336.72 00:12:36.900 ======================================================== 00:12:36.900 Total : 33547.19 131.04 7632.80 2074.88 50502.75 00:12:36.900 00:12:36.900 14:47:10 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:36.900 14:47:10 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 562498e4-0358-4d3d-9240-913d0a8d7dee 00:12:36.900 14:47:10 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f1a9991-9419-4f7b-9f08-4c68f9f49806 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:37.156 rmmod nvme_rdma 00:12:37.156 rmmod nvme_fabrics 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2783979 ']' 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2783979 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2783979 ']' 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@952 -- # kill -0 2783979 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2783979 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2783979' 00:12:37.156 killing process with pid 2783979 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2783979 00:12:37.156 14:47:10 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2783979 00:12:37.414 14:47:11 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:37.414 14:47:11 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:37.414 00:12:37.414 real 0m20.439s 00:12:37.414 user 1m10.977s 00:12:37.414 sys 0m4.958s 00:12:37.414 14:47:11 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.414 14:47:11 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:37.414 ************************************ 00:12:37.414 END TEST nvmf_lvol 00:12:37.414 ************************************ 00:12:37.414 14:47:11 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:37.414 14:47:11 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:37.414 14:47:11 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.414 14:47:11 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.414 14:47:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:37.414 ************************************ 00:12:37.414 START TEST nvmf_lvs_grow 00:12:37.414 ************************************ 00:12:37.414 14:47:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:37.671 * Looking for test storage... 
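The nvmf_lvol run that just ended drives the target through the RPC sequence visible in the trace: two malloc bdevs striped into a RAID0, an lvstore and a 20 MiB lvol on top, the lvol exported over RDMA, and snapshot/resize/clone/inflate exercised while spdk_nvme_perf writes to it. A condensed sketch of that sequence; the UUIDs differ between runs, so they are captured from command output here:

./scripts/rpc.py bdev_malloc_create 64 512     # Malloc0
./scripts/rpc.py bdev_malloc_create 64 512     # Malloc1
./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)
lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 20)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
./build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
snap=$(./scripts/rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
./scripts/rpc.py bdev_lvol_resize "$lvol" 30
clone=$(./scripts/rpc.py bdev_lvol_clone "$snap" MY_CLONE)
./scripts/rpc.py bdev_lvol_inflate "$clone"
wait                                           # let the perf job finish
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_lvol_delete "$lvol"
./scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"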
00:12:37.671 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.671 14:47:11 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:42.935 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:42.935 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:42.935 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:42.936 Found net devices under 0000:da:00.0: mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.936 14:47:16 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:42.936 Found net devices under 0000:da:00.1: mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:42.936 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:42.936 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:42.936 altname enp218s0f0np0 00:12:42.936 altname ens818f0np0 00:12:42.936 inet 192.168.100.8/24 scope global mlx_0_0 00:12:42.936 valid_lft forever preferred_lft forever 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:42.936 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:42.936 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:42.936 altname enp218s0f1np1 00:12:42.936 altname ens818f1np1 00:12:42.936 inet 192.168.100.9/24 scope global mlx_0_1 00:12:42.936 valid_lft forever preferred_lft forever 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:42.936 192.168.100.9' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:42.936 192.168.100.9' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:42.936 192.168.100.9' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:42.936 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:43.195 14:47:16 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:43.195 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.195 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2789596 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2789596 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2789596 ']' 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.196 14:47:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:43.196 [2024-07-15 14:47:16.922754] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:12:43.196 [2024-07-15 14:47:16.922804] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.196 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.196 [2024-07-15 14:47:16.978464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.196 [2024-07-15 14:47:17.053181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.196 [2024-07-15 14:47:17.053221] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.196 [2024-07-15 14:47:17.053227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.196 [2024-07-15 14:47:17.053233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.196 [2024-07-15 14:47:17.053238] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
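Earlier in this trace the two exported target addresses are picked straight out of RDMA_IP_LIST, one interface per line; a small sketch of that selection:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9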
00:12:43.196 [2024-07-15 14:47:17.053263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:44.130 [2024-07-15 14:47:17.917295] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21c1910/0x21c5e00) succeed. 00:12:44.130 [2024-07-15 14:47:17.926295] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21c2e10/0x2207490) succeed. 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.130 14:47:17 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:44.130 ************************************ 00:12:44.130 START TEST lvs_grow_clean 00:12:44.130 ************************************ 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:44.130 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:44.389 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:44.389 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:44.647 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:44.647 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:44.647 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:44.906 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:44.906 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:44.906 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 lvol 150 00:12:44.906 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=08dc8a28-9910-4e06-9048-ded9d5b14b95 00:12:44.906 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:44.906 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:45.164 [2024-07-15 14:47:18.928002] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:45.164 [2024-07-15 14:47:18.928056] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:45.164 true 00:12:45.164 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:45.164 14:47:18 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:45.422 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:45.422 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:45.422 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 08dc8a28-9910-4e06-9048-ded9d5b14b95 00:12:45.681 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:45.681 [2024-07-15 14:47:19.586236] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:45.681 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2790103 
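The lvs_grow_clean setup traced above prepares everything the grow test needs: a 200 MiB file-backed aio bdev, an lvstore with a fixed 4 MiB cluster size and reserved metadata pages, a 150 MiB lvol exported over RDMA, and a backing file that is later enlarged to 400 MiB and rescanned so the lvstore can be grown. A condensed sketch of those steps as they appear in the trace (paths relative to the SPDK checkout; the grow itself is issued once bdevperf is writing, as shown further below):

truncate -s 200M test/nvmf/target/aio_bdev
./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
lvs=$(./scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$(./scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
truncate -s 400M test/nvmf/target/aio_bdev
./scripts/rpc.py bdev_aio_rescan aio_bdev
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
# under load:
./scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99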
00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2790103 /var/tmp/bdevperf.sock 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2790103 ']' 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:45.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.939 14:47:19 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:45.939 [2024-07-15 14:47:19.798337] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:12:45.939 [2024-07-15 14:47:19.798380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790103 ] 00:12:45.939 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.939 [2024-07-15 14:47:19.853552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.197 [2024-07-15 14:47:19.932230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.857 14:47:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.857 14:47:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:12:46.857 14:47:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:47.116 Nvme0n1 00:12:47.116 14:47:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:47.116 [ 00:12:47.116 { 00:12:47.116 "name": "Nvme0n1", 00:12:47.116 "aliases": [ 00:12:47.116 "08dc8a28-9910-4e06-9048-ded9d5b14b95" 00:12:47.116 ], 00:12:47.116 "product_name": "NVMe disk", 00:12:47.116 "block_size": 4096, 00:12:47.116 "num_blocks": 38912, 00:12:47.116 "uuid": "08dc8a28-9910-4e06-9048-ded9d5b14b95", 00:12:47.116 "assigned_rate_limits": { 00:12:47.116 "rw_ios_per_sec": 0, 00:12:47.116 "rw_mbytes_per_sec": 0, 00:12:47.116 "r_mbytes_per_sec": 0, 00:12:47.116 "w_mbytes_per_sec": 0 00:12:47.116 }, 00:12:47.116 "claimed": false, 00:12:47.116 "zoned": false, 00:12:47.116 "supported_io_types": { 00:12:47.116 "read": true, 00:12:47.116 
"write": true, 00:12:47.116 "unmap": true, 00:12:47.116 "flush": true, 00:12:47.116 "reset": true, 00:12:47.116 "nvme_admin": true, 00:12:47.116 "nvme_io": true, 00:12:47.116 "nvme_io_md": false, 00:12:47.116 "write_zeroes": true, 00:12:47.116 "zcopy": false, 00:12:47.116 "get_zone_info": false, 00:12:47.116 "zone_management": false, 00:12:47.116 "zone_append": false, 00:12:47.116 "compare": true, 00:12:47.116 "compare_and_write": true, 00:12:47.116 "abort": true, 00:12:47.116 "seek_hole": false, 00:12:47.116 "seek_data": false, 00:12:47.116 "copy": true, 00:12:47.116 "nvme_iov_md": false 00:12:47.116 }, 00:12:47.116 "memory_domains": [ 00:12:47.116 { 00:12:47.116 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:47.116 "dma_device_type": 0 00:12:47.116 } 00:12:47.116 ], 00:12:47.116 "driver_specific": { 00:12:47.116 "nvme": [ 00:12:47.116 { 00:12:47.116 "trid": { 00:12:47.116 "trtype": "RDMA", 00:12:47.116 "adrfam": "IPv4", 00:12:47.116 "traddr": "192.168.100.8", 00:12:47.116 "trsvcid": "4420", 00:12:47.116 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:47.116 }, 00:12:47.116 "ctrlr_data": { 00:12:47.116 "cntlid": 1, 00:12:47.116 "vendor_id": "0x8086", 00:12:47.116 "model_number": "SPDK bdev Controller", 00:12:47.116 "serial_number": "SPDK0", 00:12:47.116 "firmware_revision": "24.09", 00:12:47.116 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:47.116 "oacs": { 00:12:47.116 "security": 0, 00:12:47.116 "format": 0, 00:12:47.116 "firmware": 0, 00:12:47.116 "ns_manage": 0 00:12:47.116 }, 00:12:47.116 "multi_ctrlr": true, 00:12:47.116 "ana_reporting": false 00:12:47.116 }, 00:12:47.116 "vs": { 00:12:47.116 "nvme_version": "1.3" 00:12:47.116 }, 00:12:47.116 "ns_data": { 00:12:47.116 "id": 1, 00:12:47.116 "can_share": true 00:12:47.116 } 00:12:47.117 } 00:12:47.117 ], 00:12:47.117 "mp_policy": "active_passive" 00:12:47.117 } 00:12:47.117 } 00:12:47.117 ] 00:12:47.117 14:47:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2790319 00:12:47.117 14:47:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:47.117 14:47:20 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:47.375 Running I/O for 10 seconds... 
00:12:48.313 Latency(us) 00:12:48.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.313 Nvme0n1 : 1.00 34817.00 136.00 0.00 0.00 0.00 0.00 0.00 00:12:48.313 =================================================================================================================== 00:12:48.313 Total : 34817.00 136.00 0.00 0.00 0.00 0.00 0.00 00:12:48.313 00:12:49.248 14:47:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:49.248 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.248 Nvme0n1 : 2.00 35057.50 136.94 0.00 0.00 0.00 0.00 0.00 00:12:49.248 =================================================================================================================== 00:12:49.248 Total : 35057.50 136.94 0.00 0.00 0.00 0.00 0.00 00:12:49.248 00:12:49.248 true 00:12:49.248 14:47:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:49.248 14:47:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:49.505 14:47:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:49.505 14:47:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:49.505 14:47:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2790319 00:12:50.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.438 Nvme0n1 : 3.00 35125.67 137.21 0.00 0.00 0.00 0.00 0.00 00:12:50.438 =================================================================================================================== 00:12:50.438 Total : 35125.67 137.21 0.00 0.00 0.00 0.00 0.00 00:12:50.438 00:12:51.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.399 Nvme0n1 : 4.00 35224.00 137.59 0.00 0.00 0.00 0.00 0.00 00:12:51.399 =================================================================================================================== 00:12:51.399 Total : 35224.00 137.59 0.00 0.00 0.00 0.00 0.00 00:12:51.399 00:12:52.330 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.330 Nvme0n1 : 5.00 35295.60 137.87 0.00 0.00 0.00 0.00 0.00 00:12:52.330 =================================================================================================================== 00:12:52.330 Total : 35295.60 137.87 0.00 0.00 0.00 0.00 0.00 00:12:52.330 00:12:53.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.265 Nvme0n1 : 6.00 35327.50 138.00 0.00 0.00 0.00 0.00 0.00 00:12:53.265 =================================================================================================================== 00:12:53.265 Total : 35327.50 138.00 0.00 0.00 0.00 0.00 0.00 00:12:53.265 00:12:54.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.201 Nvme0n1 : 7.00 35351.43 138.09 0.00 0.00 0.00 0.00 0.00 00:12:54.201 =================================================================================================================== 00:12:54.201 Total : 35351.43 138.09 0.00 0.00 
0.00 0.00 0.00 00:12:54.201 00:12:55.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.578 Nvme0n1 : 8.00 35376.62 138.19 0.00 0.00 0.00 0.00 0.00 00:12:55.578 =================================================================================================================== 00:12:55.578 Total : 35376.62 138.19 0.00 0.00 0.00 0.00 0.00 00:12:55.578 00:12:56.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.513 Nvme0n1 : 9.00 35371.22 138.17 0.00 0.00 0.00 0.00 0.00 00:12:56.513 =================================================================================================================== 00:12:56.513 Total : 35371.22 138.17 0.00 0.00 0.00 0.00 0.00 00:12:56.513 00:12:57.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.450 Nvme0n1 : 10.00 35334.00 138.02 0.00 0.00 0.00 0.00 0.00 00:12:57.450 =================================================================================================================== 00:12:57.450 Total : 35334.00 138.02 0.00 0.00 0.00 0.00 0.00 00:12:57.450 00:12:57.450 00:12:57.450 Latency(us) 00:12:57.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.450 Nvme0n1 : 10.00 35334.28 138.02 0.00 0.00 3619.54 2168.93 8051.57 00:12:57.450 =================================================================================================================== 00:12:57.450 Total : 35334.28 138.02 0.00 0.00 3619.54 2168.93 8051.57 00:12:57.450 0 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2790103 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2790103 ']' 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2790103 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2790103 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2790103' 00:12:57.450 killing process with pid 2790103 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2790103 00:12:57.450 Received shutdown signal, test time was about 10.000000 seconds 00:12:57.450 00:12:57.450 Latency(us) 00:12:57.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.450 =================================================================================================================== 00:12:57.450 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2790103 00:12:57.450 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:57.708 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:57.966 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:57.966 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:58.225 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:58.225 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:58.225 14:47:31 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:58.225 [2024-07-15 14:47:32.044360] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:58.225 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:58.483 request: 00:12:58.483 { 00:12:58.483 "uuid": "e1e0843f-c7e1-4846-9fcd-b2f386fc45d5", 00:12:58.483 "method": "bdev_lvol_get_lvstores", 00:12:58.483 "req_id": 1 00:12:58.483 } 00:12:58.483 Got JSON-RPC error response 00:12:58.483 response: 00:12:58.483 { 00:12:58.483 "code": -19, 00:12:58.483 "message": "No such device" 00:12:58.483 } 
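The request/response pair above is the expected negative check of the clean path: deleting the backing aio_bdev closes the lvstore with it ("bdev aio_bdev being removed: closing lvstore lvs"), so a subsequent lookup must fail with -19 (No such device). In sketch form, with the same placeholders as before (the real script wraps the failing call in its NOT helper; the if below is just an equivalent way to express the expectation):

    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61: the lvol pins 38 of 99
    $rpc bdev_aio_delete aio_bdev                                        # closes lvstore 'lvs' too
    if $rpc bdev_lvol_get_lvstores -u "$lvs"; then
        echo "lvstore unexpectedly still present" >&2
        exit 1
    fi                                          # JSON-RPC error -19: No such device

    # re-create the AIO bdev; the lvstore and its lvol are re-examined from disk
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$lvol" -t 2000      # lvs/lvol is back, 38 clusters still allocated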
00:12:58.483 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:58.483 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.483 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.483 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.483 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:58.742 aio_bdev 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 08dc8a28-9910-4e06-9048-ded9d5b14b95 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=08dc8a28-9910-4e06-9048-ded9d5b14b95 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:58.742 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 08dc8a28-9910-4e06-9048-ded9d5b14b95 -t 2000 00:12:59.001 [ 00:12:59.001 { 00:12:59.001 "name": "08dc8a28-9910-4e06-9048-ded9d5b14b95", 00:12:59.001 "aliases": [ 00:12:59.001 "lvs/lvol" 00:12:59.001 ], 00:12:59.001 "product_name": "Logical Volume", 00:12:59.001 "block_size": 4096, 00:12:59.001 "num_blocks": 38912, 00:12:59.001 "uuid": "08dc8a28-9910-4e06-9048-ded9d5b14b95", 00:12:59.001 "assigned_rate_limits": { 00:12:59.001 "rw_ios_per_sec": 0, 00:12:59.001 "rw_mbytes_per_sec": 0, 00:12:59.001 "r_mbytes_per_sec": 0, 00:12:59.001 "w_mbytes_per_sec": 0 00:12:59.001 }, 00:12:59.001 "claimed": false, 00:12:59.001 "zoned": false, 00:12:59.001 "supported_io_types": { 00:12:59.001 "read": true, 00:12:59.001 "write": true, 00:12:59.001 "unmap": true, 00:12:59.001 "flush": false, 00:12:59.001 "reset": true, 00:12:59.001 "nvme_admin": false, 00:12:59.001 "nvme_io": false, 00:12:59.001 "nvme_io_md": false, 00:12:59.001 "write_zeroes": true, 00:12:59.001 "zcopy": false, 00:12:59.001 "get_zone_info": false, 00:12:59.001 "zone_management": false, 00:12:59.001 "zone_append": false, 00:12:59.001 "compare": false, 00:12:59.001 "compare_and_write": false, 00:12:59.001 "abort": false, 00:12:59.001 "seek_hole": true, 00:12:59.001 "seek_data": true, 00:12:59.001 "copy": false, 00:12:59.001 "nvme_iov_md": false 00:12:59.001 }, 00:12:59.001 "driver_specific": { 00:12:59.001 "lvol": { 00:12:59.001 "lvol_store_uuid": "e1e0843f-c7e1-4846-9fcd-b2f386fc45d5", 00:12:59.001 "base_bdev": "aio_bdev", 00:12:59.001 "thin_provision": false, 00:12:59.001 "num_allocated_clusters": 38, 00:12:59.001 "snapshot": false, 00:12:59.001 "clone": false, 00:12:59.001 "esnap_clone": false 00:12:59.001 } 00:12:59.001 } 00:12:59.001 } 
00:12:59.001 ] 00:12:59.001 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:59.001 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:59.001 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:59.001 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:59.001 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:59.001 14:47:32 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:59.259 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:59.259 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08dc8a28-9910-4e06-9048-ded9d5b14b95 00:12:59.516 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e1e0843f-c7e1-4846-9fcd-b2f386fc45d5 00:12:59.516 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:59.774 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.775 00:12:59.775 real 0m15.573s 00:12:59.775 user 0m15.562s 00:12:59.775 sys 0m0.991s 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:59.775 ************************************ 00:12:59.775 END TEST lvs_grow_clean 00:12:59.775 ************************************ 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:59.775 ************************************ 00:12:59.775 START TEST lvs_grow_dirty 00:12:59.775 ************************************ 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:59.775 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:00.033 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:00.033 14:47:33 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:00.291 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:00.291 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:00.291 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:00.550 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:00.550 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:00.550 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 lvol 150 00:13:00.550 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5ed8d09f-d060-4dde-8c42-c8abafad993a 00:13:00.550 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:00.550 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:00.809 [2024-07-15 14:47:34.530077] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:00.809 [2024-07-15 14:47:34.530132] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:00.809 true 00:13:00.809 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:00.809 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:00.809 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:13:00.809 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:01.068 14:47:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ed8d09f-d060-4dde-8c42-c8abafad993a 00:13:01.327 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:01.327 [2024-07-15 14:47:35.172202] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:01.327 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2792701 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2792701 /var/tmp/bdevperf.sock 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2792701 ']' 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:01.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.587 14:47:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:01.587 [2024-07-15 14:47:35.392866] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
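For orientation while reading the rest of the dirty-path trace: the setup and bdevperf run mirror the clean path above, and what actually makes this variant "dirty" comes afterwards — the original nvmf target is killed with SIGKILL while the lvstore is still loaded, a fresh target is started, and re-creating the AIO bdev forces blobstore recovery (the bs_recover / "Recover: blob" notices further down). Roughly, with nvmfappstart and $nvmfpid being the nvmf/common.sh helper and variable visible in the trace, and the other placeholders as before:

    kill -9 "$nvmfpid"            # old nvmf_tgt dies without unloading the lvstore
    nvmfappstart -m 0x1           # start a fresh target
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    # on load the blobstore notices the unclean shutdown and replays its metadata:
    #   bs_recover: Performing recovery on blobstore
    #   bs_load_replay_md_cpl: Recover: blob 0x0 / 0x1
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # still 61
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # still 99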
00:13:01.587 [2024-07-15 14:47:35.392909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792701 ] 00:13:01.587 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.587 [2024-07-15 14:47:35.445749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.845 [2024-07-15 14:47:35.518144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.414 14:47:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.414 14:47:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:02.414 14:47:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:02.673 Nvme0n1 00:13:02.673 14:47:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:02.932 [ 00:13:02.932 { 00:13:02.932 "name": "Nvme0n1", 00:13:02.932 "aliases": [ 00:13:02.932 "5ed8d09f-d060-4dde-8c42-c8abafad993a" 00:13:02.932 ], 00:13:02.932 "product_name": "NVMe disk", 00:13:02.932 "block_size": 4096, 00:13:02.932 "num_blocks": 38912, 00:13:02.932 "uuid": "5ed8d09f-d060-4dde-8c42-c8abafad993a", 00:13:02.932 "assigned_rate_limits": { 00:13:02.932 "rw_ios_per_sec": 0, 00:13:02.932 "rw_mbytes_per_sec": 0, 00:13:02.932 "r_mbytes_per_sec": 0, 00:13:02.932 "w_mbytes_per_sec": 0 00:13:02.932 }, 00:13:02.932 "claimed": false, 00:13:02.932 "zoned": false, 00:13:02.932 "supported_io_types": { 00:13:02.932 "read": true, 00:13:02.932 "write": true, 00:13:02.932 "unmap": true, 00:13:02.932 "flush": true, 00:13:02.932 "reset": true, 00:13:02.932 "nvme_admin": true, 00:13:02.932 "nvme_io": true, 00:13:02.932 "nvme_io_md": false, 00:13:02.932 "write_zeroes": true, 00:13:02.932 "zcopy": false, 00:13:02.932 "get_zone_info": false, 00:13:02.932 "zone_management": false, 00:13:02.932 "zone_append": false, 00:13:02.932 "compare": true, 00:13:02.932 "compare_and_write": true, 00:13:02.932 "abort": true, 00:13:02.932 "seek_hole": false, 00:13:02.932 "seek_data": false, 00:13:02.932 "copy": true, 00:13:02.932 "nvme_iov_md": false 00:13:02.932 }, 00:13:02.932 "memory_domains": [ 00:13:02.932 { 00:13:02.933 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:02.933 "dma_device_type": 0 00:13:02.933 } 00:13:02.933 ], 00:13:02.933 "driver_specific": { 00:13:02.933 "nvme": [ 00:13:02.933 { 00:13:02.933 "trid": { 00:13:02.933 "trtype": "RDMA", 00:13:02.933 "adrfam": "IPv4", 00:13:02.933 "traddr": "192.168.100.8", 00:13:02.933 "trsvcid": "4420", 00:13:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:02.933 }, 00:13:02.933 "ctrlr_data": { 00:13:02.933 "cntlid": 1, 00:13:02.933 "vendor_id": "0x8086", 00:13:02.933 "model_number": "SPDK bdev Controller", 00:13:02.933 "serial_number": "SPDK0", 00:13:02.933 "firmware_revision": "24.09", 00:13:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:02.933 "oacs": { 00:13:02.933 "security": 0, 00:13:02.933 "format": 0, 00:13:02.933 "firmware": 0, 00:13:02.933 "ns_manage": 0 00:13:02.933 }, 00:13:02.933 "multi_ctrlr": true, 00:13:02.933 "ana_reporting": false 
00:13:02.933 }, 00:13:02.933 "vs": { 00:13:02.933 "nvme_version": "1.3" 00:13:02.933 }, 00:13:02.933 "ns_data": { 00:13:02.933 "id": 1, 00:13:02.933 "can_share": true 00:13:02.933 } 00:13:02.933 } 00:13:02.933 ], 00:13:02.933 "mp_policy": "active_passive" 00:13:02.933 } 00:13:02.933 } 00:13:02.933 ] 00:13:02.933 14:47:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2792939 00:13:02.933 14:47:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:02.933 14:47:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:02.933 Running I/O for 10 seconds... 00:13:03.869 Latency(us) 00:13:03.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.869 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.869 Nvme0n1 : 1.00 34720.00 135.62 0.00 0.00 0.00 0.00 0.00 00:13:03.869 =================================================================================================================== 00:13:03.869 Total : 34720.00 135.62 0.00 0.00 0.00 0.00 0.00 00:13:03.869 00:13:04.804 14:47:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:05.063 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.063 Nvme0n1 : 2.00 35025.50 136.82 0.00 0.00 0.00 0.00 0.00 00:13:05.063 =================================================================================================================== 00:13:05.063 Total : 35025.50 136.82 0.00 0.00 0.00 0.00 0.00 00:13:05.063 00:13:05.063 true 00:13:05.063 14:47:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:05.063 14:47:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:05.322 14:47:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:05.322 14:47:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:05.322 14:47:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2792939 00:13:05.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:05.889 Nvme0n1 : 3.00 35053.33 136.93 0.00 0.00 0.00 0.00 0.00 00:13:05.889 =================================================================================================================== 00:13:05.889 Total : 35053.33 136.93 0.00 0.00 0.00 0.00 0.00 00:13:05.889 00:13:06.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:06.825 Nvme0n1 : 4.00 35113.75 137.16 0.00 0.00 0.00 0.00 0.00 00:13:06.825 =================================================================================================================== 00:13:06.825 Total : 35113.75 137.16 0.00 0.00 0.00 0.00 0.00 00:13:06.825 00:13:08.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.201 Nvme0n1 : 5.00 35192.60 137.47 0.00 0.00 0.00 0.00 0.00 00:13:08.201 
=================================================================================================================== 00:13:08.201 Total : 35192.60 137.47 0.00 0.00 0.00 0.00 0.00 00:13:08.201 00:13:09.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.134 Nvme0n1 : 6.00 35257.83 137.73 0.00 0.00 0.00 0.00 0.00 00:13:09.134 =================================================================================================================== 00:13:09.134 Total : 35257.83 137.73 0.00 0.00 0.00 0.00 0.00 00:13:09.134 00:13:10.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.065 Nvme0n1 : 7.00 35296.86 137.88 0.00 0.00 0.00 0.00 0.00 00:13:10.065 =================================================================================================================== 00:13:10.065 Total : 35296.86 137.88 0.00 0.00 0.00 0.00 0.00 00:13:10.065 00:13:10.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.998 Nvme0n1 : 8.00 35328.88 138.00 0.00 0.00 0.00 0.00 0.00 00:13:10.998 =================================================================================================================== 00:13:10.998 Total : 35328.88 138.00 0.00 0.00 0.00 0.00 0.00 00:13:10.998 00:13:11.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.932 Nvme0n1 : 9.00 35357.00 138.11 0.00 0.00 0.00 0.00 0.00 00:13:11.932 =================================================================================================================== 00:13:11.932 Total : 35357.00 138.11 0.00 0.00 0.00 0.00 0.00 00:13:11.932 00:13:12.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.868 Nvme0n1 : 10.00 35378.60 138.20 0.00 0.00 0.00 0.00 0.00 00:13:12.868 =================================================================================================================== 00:13:12.868 Total : 35378.60 138.20 0.00 0.00 0.00 0.00 0.00 00:13:12.868 00:13:12.868 00:13:12.868 Latency(us) 00:13:12.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.868 Nvme0n1 : 10.00 35379.17 138.20 0.00 0.00 3615.09 2699.46 10298.51 00:13:12.868 =================================================================================================================== 00:13:12.868 Total : 35379.17 138.20 0.00 0.00 3615.09 2699.46 10298.51 00:13:12.868 0 00:13:12.868 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2792701 00:13:12.868 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2792701 ']' 00:13:12.868 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2792701 00:13:12.868 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:12.868 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:12.868 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2792701 00:13:13.128 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:13.128 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:13.128 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2792701' 00:13:13.128 killing process with pid 2792701 00:13:13.128 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2792701 00:13:13.128 Received shutdown signal, test time was about 10.000000 seconds 00:13:13.128 00:13:13.128 Latency(us) 00:13:13.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.128 =================================================================================================================== 00:13:13.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:13.128 14:47:46 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2792701 00:13:13.128 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:13.387 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:13.646 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:13.646 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2789596 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2789596 00:13:13.905 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2789596 Killed "${NVMF_APP[@]}" "$@" 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2794777 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2794777 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2794777 ']' 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.905 14:47:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:13.905 [2024-07-15 14:47:47.656514] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:13:13.905 [2024-07-15 14:47:47.656583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.905 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.905 [2024-07-15 14:47:47.712511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.905 [2024-07-15 14:47:47.790382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.905 [2024-07-15 14:47:47.790415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.905 [2024-07-15 14:47:47.790422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.905 [2024-07-15 14:47:47.790428] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.905 [2024-07-15 14:47:47.790433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.905 [2024-07-15 14:47:47.790448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:14.843 [2024-07-15 14:47:48.643014] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:14.843 [2024-07-15 14:47:48.643100] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:14.843 [2024-07-15 14:47:48.643124] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5ed8d09f-d060-4dde-8c42-c8abafad993a 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=5ed8d09f-d060-4dde-8c42-c8abafad993a 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:14.843 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:15.102 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ed8d09f-d060-4dde-8c42-c8abafad993a -t 2000 00:13:15.102 [ 00:13:15.102 { 00:13:15.102 "name": "5ed8d09f-d060-4dde-8c42-c8abafad993a", 00:13:15.102 "aliases": [ 00:13:15.102 "lvs/lvol" 00:13:15.102 ], 00:13:15.102 "product_name": "Logical Volume", 00:13:15.102 "block_size": 4096, 00:13:15.102 "num_blocks": 38912, 00:13:15.102 "uuid": "5ed8d09f-d060-4dde-8c42-c8abafad993a", 00:13:15.102 "assigned_rate_limits": { 00:13:15.102 "rw_ios_per_sec": 0, 00:13:15.102 "rw_mbytes_per_sec": 0, 00:13:15.102 "r_mbytes_per_sec": 0, 00:13:15.102 "w_mbytes_per_sec": 0 00:13:15.102 }, 00:13:15.102 "claimed": false, 00:13:15.102 "zoned": false, 00:13:15.102 "supported_io_types": { 00:13:15.102 "read": true, 00:13:15.102 "write": true, 00:13:15.102 "unmap": true, 00:13:15.102 "flush": false, 00:13:15.102 "reset": true, 00:13:15.102 "nvme_admin": false, 00:13:15.102 "nvme_io": false, 00:13:15.102 "nvme_io_md": false, 00:13:15.102 "write_zeroes": true, 00:13:15.102 "zcopy": false, 00:13:15.102 "get_zone_info": false, 00:13:15.102 "zone_management": false, 00:13:15.102 "zone_append": false, 00:13:15.102 "compare": false, 00:13:15.102 "compare_and_write": false, 00:13:15.102 "abort": false, 00:13:15.102 "seek_hole": true, 00:13:15.102 "seek_data": true, 00:13:15.102 "copy": false, 00:13:15.102 "nvme_iov_md": false 00:13:15.102 }, 00:13:15.102 "driver_specific": { 00:13:15.102 "lvol": { 00:13:15.102 "lvol_store_uuid": "35bf1e5b-21f0-4d96-be5b-5101bad47580", 00:13:15.102 "base_bdev": "aio_bdev", 00:13:15.102 "thin_provision": false, 00:13:15.102 "num_allocated_clusters": 38, 00:13:15.102 "snapshot": false, 00:13:15.102 "clone": false, 00:13:15.102 "esnap_clone": false 00:13:15.102 } 00:13:15.102 } 00:13:15.102 } 00:13:15.102 ] 00:13:15.102 14:47:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:15.102 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:15.102 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:15.361 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:15.361 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:15.361 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:15.619 [2024-07-15 14:47:49.487435] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:15.619 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:15.878 request: 00:13:15.878 { 00:13:15.878 "uuid": "35bf1e5b-21f0-4d96-be5b-5101bad47580", 00:13:15.878 "method": "bdev_lvol_get_lvstores", 00:13:15.878 "req_id": 1 00:13:15.878 } 00:13:15.878 Got JSON-RPC error response 00:13:15.878 response: 00:13:15.878 { 00:13:15.878 "code": -19, 00:13:15.878 "message": "No such device" 00:13:15.878 } 00:13:15.878 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:15.878 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:15.878 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:15.878 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:15.878 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:16.137 aio_bdev 00:13:16.137 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5ed8d09f-d060-4dde-8c42-c8abafad993a 00:13:16.138 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@897 -- # local bdev_name=5ed8d09f-d060-4dde-8c42-c8abafad993a 00:13:16.138 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:16.138 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:16.138 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:16.138 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:16.138 14:47:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:16.138 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5ed8d09f-d060-4dde-8c42-c8abafad993a -t 2000 00:13:16.397 [ 00:13:16.397 { 00:13:16.397 "name": "5ed8d09f-d060-4dde-8c42-c8abafad993a", 00:13:16.397 "aliases": [ 00:13:16.397 "lvs/lvol" 00:13:16.397 ], 00:13:16.397 "product_name": "Logical Volume", 00:13:16.397 "block_size": 4096, 00:13:16.397 "num_blocks": 38912, 00:13:16.397 "uuid": "5ed8d09f-d060-4dde-8c42-c8abafad993a", 00:13:16.397 "assigned_rate_limits": { 00:13:16.397 "rw_ios_per_sec": 0, 00:13:16.397 "rw_mbytes_per_sec": 0, 00:13:16.397 "r_mbytes_per_sec": 0, 00:13:16.397 "w_mbytes_per_sec": 0 00:13:16.397 }, 00:13:16.397 "claimed": false, 00:13:16.397 "zoned": false, 00:13:16.397 "supported_io_types": { 00:13:16.397 "read": true, 00:13:16.397 "write": true, 00:13:16.397 "unmap": true, 00:13:16.397 "flush": false, 00:13:16.397 "reset": true, 00:13:16.397 "nvme_admin": false, 00:13:16.397 "nvme_io": false, 00:13:16.397 "nvme_io_md": false, 00:13:16.397 "write_zeroes": true, 00:13:16.397 "zcopy": false, 00:13:16.397 "get_zone_info": false, 00:13:16.397 "zone_management": false, 00:13:16.397 "zone_append": false, 00:13:16.397 "compare": false, 00:13:16.397 "compare_and_write": false, 00:13:16.397 "abort": false, 00:13:16.397 "seek_hole": true, 00:13:16.397 "seek_data": true, 00:13:16.397 "copy": false, 00:13:16.397 "nvme_iov_md": false 00:13:16.397 }, 00:13:16.397 "driver_specific": { 00:13:16.397 "lvol": { 00:13:16.397 "lvol_store_uuid": "35bf1e5b-21f0-4d96-be5b-5101bad47580", 00:13:16.397 "base_bdev": "aio_bdev", 00:13:16.397 "thin_provision": false, 00:13:16.397 "num_allocated_clusters": 38, 00:13:16.397 "snapshot": false, 00:13:16.397 "clone": false, 00:13:16.397 "esnap_clone": false 00:13:16.397 } 00:13:16.397 } 00:13:16.397 } 00:13:16.397 ] 00:13:16.397 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:16.397 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:16.397 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:16.656 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:16.656 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:16.656 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:13:16.656 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:16.656 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ed8d09f-d060-4dde-8c42-c8abafad993a 00:13:16.916 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 35bf1e5b-21f0-4d96-be5b-5101bad47580 00:13:17.174 14:47:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:17.174 14:47:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:17.174 00:13:17.174 real 0m17.403s 00:13:17.174 user 0m45.546s 00:13:17.174 sys 0m2.829s 00:13:17.174 14:47:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.174 14:47:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:17.174 ************************************ 00:13:17.174 END TEST lvs_grow_dirty 00:13:17.174 ************************************ 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:17.433 nvmf_trace.0 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:17.433 rmmod nvme_rdma 00:13:17.433 rmmod nvme_fabrics 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@124 -- # set -e 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2794777 ']' 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2794777 00:13:17.433 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2794777 ']' 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2794777 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2794777 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2794777' 00:13:17.434 killing process with pid 2794777 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2794777 00:13:17.434 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2794777 00:13:17.691 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:17.691 14:47:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:17.691 00:13:17.691 real 0m40.125s 00:13:17.691 user 1m6.881s 00:13:17.691 sys 0m8.325s 00:13:17.691 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:17.691 14:47:51 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 ************************************ 00:13:17.691 END TEST nvmf_lvs_grow 00:13:17.691 ************************************ 00:13:17.691 14:47:51 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:17.691 14:47:51 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:17.691 14:47:51 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:17.691 14:47:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:17.691 14:47:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:17.691 ************************************ 00:13:17.691 START TEST nvmf_bdev_io_wait 00:13:17.691 ************************************ 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:17.691 * Looking for test storage... 
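The trace above closes out nvmf_lvs_grow with the usual nvmftestfini teardown (sync, unload nvme-rdma/nvme-fabrics, kill the nvmf_tgt pid) and then hands off to run_test for the next suite. A minimal sketch of that hand-off, using only the pid and script path shown in the log (illustrative, not the harness source):

  # teardown traced above (nvmftestfini); module unload is tolerated if already gone
  sync
  modprobe -v -r nvme-rdma || true
  modprobe -v -r nvme-fabrics || true
  kill 2794777 && wait 2794777     # nvmf_tgt started for the lvs_grow suite

  # next suite is launched through the run_test wrapper, as logged below
  run_test nvmf_bdev_io_wait \
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma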
00:13:17.691 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.691 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.692 14:47:51 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.692 14:47:51 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.959 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.960 
14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:22.960 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:22.960 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:22.960 Found net devices under 0000:da:00.0: mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:22.960 Found net devices under 0000:da:00.1: mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:22.960 14:47:56 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:22.960 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:22.960 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:22.960 altname enp218s0f0np0 00:13:22.960 altname ens818f0np0 00:13:22.960 inet 192.168.100.8/24 scope global mlx_0_0 00:13:22.960 valid_lft forever preferred_lft forever 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:22.960 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:22.960 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:22.960 altname enp218s0f1np1 00:13:22.960 altname ens818f1np1 00:13:22.960 inet 192.168.100.9/24 scope global mlx_0_1 00:13:22.960 valid_lft forever preferred_lft forever 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:22.960 14:47:56 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:22.960 192.168.100.9' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:22.960 192.168.100.9' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:22.960 192.168.100.9' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2798369 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2798369 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2798369 ']' 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.960 14:47:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:22.960 [2024-07-15 14:47:56.764712] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:13:22.960 [2024-07-15 14:47:56.764762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.960 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.960 [2024-07-15 14:47:56.820762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.219 [2024-07-15 14:47:56.896990] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.219 [2024-07-15 14:47:56.897027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:23.219 [2024-07-15 14:47:56.897033] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.219 [2024-07-15 14:47:56.897039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.219 [2024-07-15 14:47:56.897044] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.219 [2024-07-15 14:47:56.897087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.219 [2024-07-15 14:47:56.897186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.219 [2024-07-15 14:47:56.897264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.219 [2024-07-15 14:47:56.897265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.787 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:23.787 [2024-07-15 14:47:57.703370] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x555b20/0x55a010) succeed. 00:13:24.046 [2024-07-15 14:47:57.712296] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x557160/0x59b6a0) succeed. 
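The rpc_cmd calls traced just above (bdev_set_options -p 5 -c 1, framework_start_init, nvmf_create_transport -t rdma) are what produce the two "Create IB device mlx5_*" notices. Run by hand against an already-started nvmf_tgt they would look roughly like the following sketch; the rpc.py path is the one used throughout this workspace and the default RPC socket is assumed:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_set_options -p 5 -c 1                              # tiny bdev_io pool/cache, forcing the bdev_io_wait path
  $RPC framework_start_init
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The Malloc0 bdev, the cnode1 subsystem and the 192.168.100.8:4420 RDMA listener follow in the trace below.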
00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.046 Malloc0 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:24.046 [2024-07-15 14:47:57.884609] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2798624 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2798626 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.046 { 00:13:24.046 "params": { 00:13:24.046 "name": "Nvme$subsystem", 00:13:24.046 "trtype": "$TEST_TRANSPORT", 00:13:24.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.046 "adrfam": "ipv4", 00:13:24.046 "trsvcid": "$NVMF_PORT", 00:13:24.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.046 "hdgst": ${hdgst:-false}, 00:13:24.046 "ddgst": ${ddgst:-false} 00:13:24.046 }, 00:13:24.046 "method": "bdev_nvme_attach_controller" 00:13:24.046 } 00:13:24.046 EOF 00:13:24.046 
)") 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2798628 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.046 { 00:13:24.046 "params": { 00:13:24.046 "name": "Nvme$subsystem", 00:13:24.046 "trtype": "$TEST_TRANSPORT", 00:13:24.046 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.046 "adrfam": "ipv4", 00:13:24.046 "trsvcid": "$NVMF_PORT", 00:13:24.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.046 "hdgst": ${hdgst:-false}, 00:13:24.046 "ddgst": ${ddgst:-false} 00:13:24.046 }, 00:13:24.046 "method": "bdev_nvme_attach_controller" 00:13:24.046 } 00:13:24.046 EOF 00:13:24.046 )") 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2798631 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.046 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.047 { 00:13:24.047 "params": { 00:13:24.047 "name": "Nvme$subsystem", 00:13:24.047 "trtype": "$TEST_TRANSPORT", 00:13:24.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.047 "adrfam": "ipv4", 00:13:24.047 "trsvcid": "$NVMF_PORT", 00:13:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.047 "hdgst": ${hdgst:-false}, 00:13:24.047 "ddgst": ${ddgst:-false} 00:13:24.047 }, 00:13:24.047 "method": "bdev_nvme_attach_controller" 00:13:24.047 } 00:13:24.047 EOF 00:13:24.047 )") 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:24.047 14:47:57 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:24.047 { 00:13:24.047 "params": { 00:13:24.047 "name": "Nvme$subsystem", 00:13:24.047 "trtype": "$TEST_TRANSPORT", 00:13:24.047 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:24.047 "adrfam": "ipv4", 00:13:24.047 "trsvcid": "$NVMF_PORT", 00:13:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:24.047 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:24.047 "hdgst": ${hdgst:-false}, 00:13:24.047 "ddgst": ${ddgst:-false} 00:13:24.047 }, 00:13:24.047 "method": "bdev_nvme_attach_controller" 00:13:24.047 } 00:13:24.047 EOF 00:13:24.047 )") 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2798624 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.047 "params": { 00:13:24.047 "name": "Nvme1", 00:13:24.047 "trtype": "rdma", 00:13:24.047 "traddr": "192.168.100.8", 00:13:24.047 "adrfam": "ipv4", 00:13:24.047 "trsvcid": "4420", 00:13:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.047 "hdgst": false, 00:13:24.047 "ddgst": false 00:13:24.047 }, 00:13:24.047 "method": "bdev_nvme_attach_controller" 00:13:24.047 }' 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
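gen_nvmf_target_json, expanded above, substitutes NVMF_FIRST_TARGET_IP (192.168.100.8) and the rdma transport into one bdev_nvme_attach_controller entry and pipes it through jq; the resolved entry is the JSON printed here, and the remaining bdevperf instances repeat the same expansion below. A sketch of how each instance consumes it, with the command line taken from the trace (the process substitution is what shows up as --json /dev/fd/63 above):

  # first (write) instance; the other three differ only in core mask, shm id and workload
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x10 -i 1 -s 256 -q 128 -o 4096 -w write -t 1 \
      --json <(gen_nvmf_target_json)     # generated config arrives on /dev/fd/63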
00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.047 "params": { 00:13:24.047 "name": "Nvme1", 00:13:24.047 "trtype": "rdma", 00:13:24.047 "traddr": "192.168.100.8", 00:13:24.047 "adrfam": "ipv4", 00:13:24.047 "trsvcid": "4420", 00:13:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.047 "hdgst": false, 00:13:24.047 "ddgst": false 00:13:24.047 }, 00:13:24.047 "method": "bdev_nvme_attach_controller" 00:13:24.047 }' 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.047 "params": { 00:13:24.047 "name": "Nvme1", 00:13:24.047 "trtype": "rdma", 00:13:24.047 "traddr": "192.168.100.8", 00:13:24.047 "adrfam": "ipv4", 00:13:24.047 "trsvcid": "4420", 00:13:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.047 "hdgst": false, 00:13:24.047 "ddgst": false 00:13:24.047 }, 00:13:24.047 "method": "bdev_nvme_attach_controller" 00:13:24.047 }' 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:24.047 14:47:57 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:24.047 "params": { 00:13:24.047 "name": "Nvme1", 00:13:24.047 "trtype": "rdma", 00:13:24.047 "traddr": "192.168.100.8", 00:13:24.047 "adrfam": "ipv4", 00:13:24.047 "trsvcid": "4420", 00:13:24.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:24.047 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:24.047 "hdgst": false, 00:13:24.047 "ddgst": false 00:13:24.047 }, 00:13:24.047 "method": "bdev_nvme_attach_controller" 00:13:24.047 }' 00:13:24.047 [2024-07-15 14:47:57.934410] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:13:24.047 [2024-07-15 14:47:57.934410] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:13:24.047 [2024-07-15 14:47:57.934461] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 14:47:57.934462] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:24.047 --proc-type=auto ] 00:13:24.047 [2024-07-15 14:47:57.934959] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:13:24.047 [2024-07-15 14:47:57.934995] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:24.047 [2024-07-15 14:47:57.938417] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
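The four bdevperf processes start almost simultaneously and share one console, so their startup banners are merged as they arrive; that is why the DPDK EAL parameter lines just above appear spliced into one another (file-prefix spdk1..spdk4 corresponds to the -i 1..4 shm ids). The launch pattern the script uses boils down to the sketch below; the real script keeps each PID in WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID and waits on them individually rather than looping:

  # one single-core bdevperf per workload, all against the same cnode1 target
  for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
      set -- $spec
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
          -m "$1" -i "$2" -s 256 -q 128 -o 4096 -w "$3" -t 1 \
          --json <(gen_nvmf_target_json) &
  done
  wait    # the trace waits on 2798624, 2798626, 2798628 and 2798631 in turn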
00:13:24.047 [2024-07-15 14:47:57.938461] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:24.305 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.305 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.305 [2024-07-15 14:47:58.115128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.305 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.305 [2024-07-15 14:47:58.192329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:24.305 [2024-07-15 14:47:58.210870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.564 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.564 [2024-07-15 14:47:58.284206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:24.564 [2024-07-15 14:47:58.307625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.564 [2024-07-15 14:47:58.360123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.564 [2024-07-15 14:47:58.392651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:24.564 [2024-07-15 14:47:58.437376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:24.822 Running I/O for 1 seconds... 00:13:24.822 Running I/O for 1 seconds... 00:13:24.822 Running I/O for 1 seconds... 00:13:24.822 Running I/O for 1 seconds... 00:13:25.758 00:13:25.758 Latency(us) 00:13:25.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.758 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:25.758 Nvme1n1 : 1.01 17244.66 67.36 0.00 0.00 7399.77 4493.90 14542.75 00:13:25.758 =================================================================================================================== 00:13:25.758 Total : 17244.66 67.36 0.00 0.00 7399.77 4493.90 14542.75 00:13:25.758 00:13:25.758 Latency(us) 00:13:25.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.758 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:25.758 Nvme1n1 : 1.00 17124.79 66.89 0.00 0.00 7452.99 4743.56 18100.42 00:13:25.758 =================================================================================================================== 00:13:25.758 Total : 17124.79 66.89 0.00 0.00 7452.99 4743.56 18100.42 00:13:25.758 00:13:25.758 Latency(us) 00:13:25.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.758 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:25.758 Nvme1n1 : 1.00 15941.74 62.27 0.00 0.00 8009.88 3729.31 18849.40 00:13:25.758 =================================================================================================================== 00:13:25.758 Total : 15941.74 62.27 0.00 0.00 8009.88 3729.31 18849.40 00:13:25.758 00:13:25.758 Latency(us) 00:13:25.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.758 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:25.758 Nvme1n1 : 1.00 255436.56 997.80 0.00 0.00 499.24 204.80 1716.42 00:13:25.758 =================================================================================================================== 00:13:25.758 Total : 255436.56 997.80 0.00 0.00 499.24 204.80 1716.42 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2798626 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2798628 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2798631 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.019 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:26.019 rmmod nvme_rdma 00:13:26.019 rmmod nvme_fabrics 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2798369 ']' 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2798369 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2798369 ']' 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2798369 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.301 14:47:59 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2798369 00:13:26.301 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.301 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:26.301 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2798369' 00:13:26.301 killing process with pid 2798369 00:13:26.301 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2798369 00:13:26.301 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2798369 00:13:26.613 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.613 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:26.613 00:13:26.613 real 0m8.781s 00:13:26.613 user 0m20.444s 00:13:26.613 sys 0m5.125s 00:13:26.613 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:13:26.613 14:48:00 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:26.613 ************************************ 00:13:26.613 END TEST nvmf_bdev_io_wait 00:13:26.613 ************************************ 00:13:26.613 14:48:00 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:26.613 14:48:00 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:26.613 14:48:00 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:26.613 14:48:00 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.613 14:48:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:26.613 ************************************ 00:13:26.613 START TEST nvmf_queue_depth 00:13:26.613 ************************************ 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:26.613 * Looking for test storage... 00:13:26.613 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:26.613 14:48:00 
nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:26.613 14:48:00 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:31.898 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:31.898 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.898 14:48:05 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:31.898 Found net devices under 0000:da:00.0: mlx_0_0 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:31.898 Found net devices under 0000:da:00.1: mlx_0_1 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:31.898 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:31.899 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:31.899 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:31.899 altname enp218s0f0np0 00:13:31.899 altname ens818f0np0 00:13:31.899 inet 192.168.100.8/24 scope global mlx_0_0 00:13:31.899 valid_lft forever preferred_lft forever 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:31.899 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:31.899 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:31.899 altname enp218s0f1np1 00:13:31.899 altname ens818f1np1 00:13:31.899 inet 192.168.100.9/24 scope global mlx_0_1 00:13:31.899 valid_lft forever preferred_lft forever 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:31.899 14:48:05 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:31.899 192.168.100.9' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:31.899 192.168.100.9' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:31.899 192.168.100.9' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2802221 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2802221 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2802221 ']' 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.899 14:48:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:31.899 [2024-07-15 14:48:05.581114] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:13:31.899 [2024-07-15 14:48:05.581159] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.899 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.899 [2024-07-15 14:48:05.636667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.899 [2024-07-15 14:48:05.715532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.899 [2024-07-15 14:48:05.715570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.899 [2024-07-15 14:48:05.715577] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.899 [2024-07-15 14:48:05.715582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.899 [2024-07-15 14:48:05.715587] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.899 [2024-07-15 14:48:05.715620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.515 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.774 [2024-07-15 14:48:06.442424] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbf6c20/0xbfb110) succeed. 00:13:32.774 [2024-07-15 14:48:06.452055] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbf8120/0xc3c7a0) succeed. 
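The rpc_cmd call above that creates the RDMA transport is a thin wrapper around scripts/rpc.py aimed at the target's RPC socket (/var/tmp/spdk.sock, as waitforlisten shows); a minimal sketch with the same flags, assuming the spdk checkout is the working directory:

    # create the NVMe-oF RDMA transport on the running nvmf_tgt (default socket /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192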
00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.774 Malloc0 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.774 [2024-07-15 14:48:06.543748] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:32.774 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2802324 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2802324 /var/tmp/bdevperf.sock 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2802324 ']' 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
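Stripped of the xtrace noise, the target-side provisioning above is four RPCs; a sketch using the values seen in the log (64 and 512 come from MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE set earlier in queue_depth.sh):

    rpc=scripts/rpc.py    # talks to the target's default /var/tmp/spdk.sock
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420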
00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.775 14:48:06 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:32.775 [2024-07-15 14:48:06.591952] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:13:32.775 [2024-07-15 14:48:06.591989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802324 ] 00:13:32.775 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.775 [2024-07-15 14:48:06.645077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.032 [2024-07-15 14:48:06.725248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.597 14:48:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.597 14:48:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:13:33.597 14:48:07 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:33.597 14:48:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.597 14:48:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:33.597 NVMe0n1 00:13:33.597 14:48:07 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.597 14:48:07 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:33.854 Running I/O for 10 seconds... 
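The initiator side seen above is the bdevperf example app driven over its own RPC socket; in outline the run is three steps (a sketch of what the log shows, with paths relative to the spdk checkout rather than the full Jenkins workspace paths):

    # 1. start bdevperf idle (-z) on a private RPC socket: queue depth 1024, 4 KiB verify workload, 10 s
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # 2. attach an NVMe-oF/RDMA controller pointing at the listener created on the target
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # 3. kick off the configured workload
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests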
00:13:43.831 00:13:43.832 Latency(us) 00:13:43.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.832 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:43.832 Verification LBA range: start 0x0 length 0x4000 00:13:43.832 NVMe0n1 : 10.05 17628.81 68.86 0.00 0.00 57941.64 22719.15 40195.41 00:13:43.832 =================================================================================================================== 00:13:43.832 Total : 17628.81 68.86 0.00 0.00 57941.64 22719.15 40195.41 00:13:43.832 0 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2802324 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2802324 ']' 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2802324 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2802324 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2802324' 00:13:43.832 killing process with pid 2802324 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2802324 00:13:43.832 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.832 00:13:43.832 Latency(us) 00:13:43.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.832 =================================================================================================================== 00:13:43.832 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.832 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2802324 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:44.089 rmmod nvme_rdma 00:13:44.089 rmmod nvme_fabrics 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2802221 ']' 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2802221 
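A quick cross-check of the result row above (assuming the 4096-byte I/O size from the -o flag): 17628.81 IOPS at 4 KiB per I/O matches the reported ~68.86 MiB/s.

    awk 'BEGIN { printf "%.2f MiB/s\n", 17628.81 * 4096 / (1024 * 1024) }'   # -> 68.86 MiB/s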
00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2802221 ']' 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2802221 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2802221 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2802221' 00:13:44.089 killing process with pid 2802221 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2802221 00:13:44.089 14:48:17 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2802221 00:13:44.347 14:48:18 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.347 14:48:18 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:44.347 00:13:44.347 real 0m17.905s 00:13:44.347 user 0m25.791s 00:13:44.347 sys 0m4.378s 00:13:44.347 14:48:18 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:44.347 14:48:18 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:44.347 ************************************ 00:13:44.347 END TEST nvmf_queue_depth 00:13:44.347 ************************************ 00:13:44.347 14:48:18 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:44.347 14:48:18 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:44.347 14:48:18 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:44.347 14:48:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.347 14:48:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:44.605 ************************************ 00:13:44.605 START TEST nvmf_target_multipath 00:13:44.605 ************************************ 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:44.605 * Looking for test storage... 
00:13:44.605 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
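nvmftestinit below repeats the RDMA interface discovery already seen in the queue_depth run; the per-interface IP lookup it performs reduces to a one-line pipeline (interface name taken from the log output that follows):

    # extract the IPv4 address of an RDMA netdev, as nvmf/common.sh's get_ip_address does
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8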
00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.605 14:48:18 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:49.875 14:48:23 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:49.875 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:49.875 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:49.875 Found net devices under 0000:da:00.0: mlx_0_0 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:49.875 Found net devices under 0000:da:00.1: mlx_0_1 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.875 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:49.876 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.876 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:49.876 altname enp218s0f0np0 00:13:49.876 altname ens818f0np0 00:13:49.876 inet 192.168.100.8/24 scope global mlx_0_0 00:13:49.876 valid_lft forever preferred_lft forever 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:49.876 14:48:23 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:49.876 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:49.876 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:49.876 altname enp218s0f1np1 00:13:49.876 altname ens818f1np1 00:13:49.876 inet 192.168.100.9/24 scope global mlx_0_1 00:13:49.876 valid_lft forever preferred_lft forever 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:49.876 192.168.100.9' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:49.876 192.168.100.9' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:49.876 192.168.100.9' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:13:49.876 run this test only with TCP transport for now 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:49.876 rmmod nvme_rdma 00:13:49.876 rmmod nvme_fabrics 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:49.876 
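For readers following the trace above: the per-interface address discovery at nvmf/common.sh@113 and the first/second target selection at @457/@458 amount to a short shell pipeline. A minimal sketch, reusing the interface name, addresses, and variable names that appear in this run (this is an illustration, not an excerpt of common.sh itself):

  # extract the IPv4 address of one RDMA-capable interface, as traced at common.sh@113
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8 in this run
  # pick the first and second target IPs out of the two-address RDMA_IP_LIST (common.sh@457/@458)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)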
14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:49.876 00:13:49.876 real 0m5.334s 00:13:49.876 user 0m1.589s 00:13:49.876 sys 0m3.865s 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.876 14:48:23 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:49.876 ************************************ 00:13:49.876 END TEST nvmf_target_multipath 00:13:49.876 ************************************ 00:13:49.876 14:48:23 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:49.876 14:48:23 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:49.876 14:48:23 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.876 14:48:23 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.876 14:48:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:49.876 ************************************ 00:13:49.876 START TEST nvmf_zcopy 00:13:49.876 ************************************ 00:13:49.876 14:48:23 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:49.876 * Looking for test storage... 
00:13:49.876 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:49.876 14:48:23 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.876 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:49.876 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.877 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.135 14:48:23 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.135 14:48:23 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.135 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:13:50.135 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:50.135 14:48:23 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:50.135 14:48:23 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:55.404 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:55.404 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:55.404 Found net devices under 0000:da:00.0: mlx_0_0 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:55.404 Found net devices under 0000:da:00.1: mlx_0_1 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.404 14:48:28 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:55.404 14:48:28 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:55.404 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:55.404 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:55.404 altname enp218s0f0np0 00:13:55.404 altname ens818f0np0 00:13:55.404 inet 192.168.100.8/24 scope global mlx_0_0 00:13:55.404 valid_lft forever preferred_lft forever 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:55.404 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:55.404 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:55.404 altname enp218s0f1np1 00:13:55.404 altname ens818f1np1 00:13:55.404 inet 192.168.100.9/24 scope global mlx_0_1 00:13:55.404 valid_lft forever preferred_lft forever 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.404 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:55.405 192.168.100.9' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:55.405 192.168.100.9' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:55.405 192.168.100.9' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2810736 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2810736 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2810736 ']' 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.405 14:48:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:55.405 [2024-07-15 14:48:29.012930] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:13:55.405 [2024-07-15 14:48:29.012978] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.405 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.405 [2024-07-15 14:48:29.067416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.405 [2024-07-15 14:48:29.150863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.405 [2024-07-15 14:48:29.150896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.405 [2024-07-15 14:48:29.150902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.405 [2024-07-15 14:48:29.150908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.405 [2024-07-15 14:48:29.150913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
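The nvmfappstart sequence traced just above launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -m 0x2, records its pid in nvmfpid, and then waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of that start-and-wait pattern; the polling loop is an approximation of what waitforlisten does, not a copy of it:

  # start the target in the background with the same flags as in this run
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # wait until the RPC socket accepts requests before issuing any rpc_cmd calls
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done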
00:13:55.405 [2024-07-15 14:48:29.150929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:13:55.981 Unsupported transport: rdma 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # type=--id 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # id=0 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:55.981 nvmf_trace.0 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@821 -- # return 0 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:55.981 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:56.239 rmmod nvme_rdma 00:13:56.239 rmmod nvme_fabrics 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2810736 ']' 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2810736 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2810736 ']' 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2810736 00:13:56.239 14:48:29 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2810736 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2810736' 00:13:56.239 killing process with pid 2810736 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2810736 00:13:56.239 14:48:29 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2810736 00:13:56.496 14:48:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.496 14:48:30 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:56.496 00:13:56.496 real 0m6.478s 00:13:56.496 user 0m2.896s 00:13:56.496 sys 0m4.204s 00:13:56.496 14:48:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:56.496 14:48:30 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:56.496 ************************************ 00:13:56.496 END TEST nvmf_zcopy 00:13:56.496 ************************************ 00:13:56.496 14:48:30 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:56.497 14:48:30 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:56.497 14:48:30 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:56.497 14:48:30 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.497 14:48:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:56.497 ************************************ 00:13:56.497 START TEST nvmf_nmic 00:13:56.497 ************************************ 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:56.497 * Looking for test storage... 
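At teardown the zcopy test archives the trace file and then stops the target with killprocess. A condensed sketch of the steps traced at autotest_common.sh@952 to @972, using the nvmfpid variable from the start step and with $output_dir standing in for the spdk/../output directory used in the log (illustrative, not the helper's exact body):

  # archive the shared-memory trace file for offline analysis (process_shm --id 0)
  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  # killprocess: check the pid still exists and still names our process, then stop it and reap it
  if kill -0 "$nvmfpid" 2>/dev/null; then
      ps --no-headers -o comm= "$nvmfpid"     # in this run: reactor_1
      kill "$nvmfpid" && wait "$nvmfpid"
  fi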
00:13:56.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.497 
14:48:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:56.497 14:48:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:01.766 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:01.766 14:48:35 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:01.766 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:01.766 Found net devices under 0000:da:00.0: mlx_0_0 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:01.766 Found net devices under 0000:da:00.1: mlx_0_1 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:01.766 14:48:35 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:01.766 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:14:01.766 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:01.766 altname enp218s0f0np0 00:14:01.766 altname ens818f0np0 00:14:01.766 inet 192.168.100.8/24 scope global mlx_0_0 00:14:01.766 valid_lft forever preferred_lft forever 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:01.766 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:01.766 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:01.766 altname enp218s0f1np1 00:14:01.766 altname ens818f1np1 00:14:01.766 inet 192.168.100.9/24 scope global mlx_0_1 00:14:01.766 valid_lft forever preferred_lft forever 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:01.766 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:01.767 192.168.100.9' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:01.767 192.168.100.9' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:01.767 192.168.100.9' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2813960 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2813960 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2813960 ']' 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.767 14:48:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:01.767 [2024-07-15 14:48:35.666136] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:14:01.767 [2024-07-15 14:48:35.666183] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.026 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.026 [2024-07-15 14:48:35.722137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.026 [2024-07-15 14:48:35.796034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.026 [2024-07-15 14:48:35.796076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.026 [2024-07-15 14:48:35.796083] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.026 [2024-07-15 14:48:35.796088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.026 [2024-07-15 14:48:35.796093] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.026 [2024-07-15 14:48:35.796159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.026 [2024-07-15 14:48:35.796256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.026 [2024-07-15 14:48:35.796327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.026 [2024-07-15 14:48:35.796328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.592 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 [2024-07-15 14:48:36.529719] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1747cc0/0x174c1b0) succeed. 00:14:02.851 [2024-07-15 14:48:36.538924] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1749300/0x178d840) succeed. 
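The allocate_nic_ips trace above reduces to a short pipeline: for each RDMA netdev returned by get_rdma_if_list, read its IPv4 address with ip -o -4 addr show, then split the resulting RDMA_IP_LIST into a first and second target IP with head/tail. Below is a minimal sketch of that extraction, assuming the two mlx_0_* interfaces seen in this run and omitting the error handling the real nvmf/common.sh helpers perform.

#!/usr/bin/env bash
# Sketch of the address discovery traced above; assumes the RDMA netdevs are
# already named mlx_0_0/mlx_0_1 and hold the 192.168.100.0/24 addresses.
get_ip_address() {
    local interface=$1
    # Same pipeline as nvmf/common.sh@113: column 4 of "ip -o -4 addr show"
    # is e.g. 192.168.100.8/24; cut strips the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ip_list=""
for nic_name in mlx_0_0 mlx_0_1; do
    rdma_ip_list+="$(get_ip_address "$nic_name")"$'\n'
done

# First address becomes the primary listener IP, the second the multipath one,
# mirroring the head/tail assignments at nvmf/common.sh@457-458.
NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"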
00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 Malloc0 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 [2024-07-15 14:48:36.704147] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:02.851 test case1: single bdev can't be used in multiple subsystems 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 [2024-07-15 14:48:36.727912] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:02.851 [2024-07-15 
14:48:36.727929] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:02.851 [2024-07-15 14:48:36.727936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:02.851 request: 00:14:02.851 { 00:14:02.851 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:02.851 "namespace": { 00:14:02.851 "bdev_name": "Malloc0", 00:14:02.851 "no_auto_visible": false 00:14:02.851 }, 00:14:02.851 "method": "nvmf_subsystem_add_ns", 00:14:02.851 "req_id": 1 00:14:02.851 } 00:14:02.851 Got JSON-RPC error response 00:14:02.851 response: 00:14:02.851 { 00:14:02.851 "code": -32602, 00:14:02.851 "message": "Invalid parameters" 00:14:02.851 } 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:02.851 Adding namespace failed - expected result. 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:02.851 test case2: host connect to nvmf target in multiple paths 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:02.851 [2024-07-15 14:48:36.739971] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.851 14:48:36 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:04.269 14:48:37 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:14:04.834 14:48:38 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.834 14:48:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:04.834 14:48:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.834 14:48:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:04.834 14:48:38 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:07.364 14:48:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:07.364 14:48:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:07.364 14:48:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.364 14:48:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:07.364 14:48:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.364 14:48:40 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:07.364 14:48:40 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:07.364 [global] 00:14:07.364 thread=1 00:14:07.364 invalidate=1 00:14:07.364 rw=write 00:14:07.364 time_based=1 00:14:07.364 runtime=1 00:14:07.364 ioengine=libaio 00:14:07.364 direct=1 00:14:07.364 bs=4096 00:14:07.364 iodepth=1 00:14:07.364 norandommap=0 00:14:07.364 numjobs=1 00:14:07.364 00:14:07.364 verify_dump=1 00:14:07.364 verify_backlog=512 00:14:07.364 verify_state_save=0 00:14:07.364 do_verify=1 00:14:07.364 verify=crc32c-intel 00:14:07.364 [job0] 00:14:07.364 filename=/dev/nvme0n1 00:14:07.364 Could not set queue depth (nvme0n1) 00:14:07.364 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:07.364 fio-3.35 00:14:07.364 Starting 1 thread 00:14:08.300 00:14:08.300 job0: (groupid=0, jobs=1): err= 0: pid=2815035: Mon Jul 15 14:48:42 2024 00:14:08.300 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:14:08.300 slat (nsec): min=5884, max=30268, avg=6934.48, stdev=867.51 00:14:08.300 clat (usec): min=42, max=140, avg=59.04, stdev= 3.93 00:14:08.300 lat (usec): min=56, max=147, avg=65.97, stdev= 4.02 00:14:08.300 clat percentiles (usec): 00:14:08.300 | 1.00th=[ 52], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56], 00:14:08.300 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 60], 00:14:08.301 | 70.00th=[ 62], 80.00th=[ 63], 90.00th=[ 64], 95.00th=[ 66], 00:14:08.301 | 99.00th=[ 69], 99.50th=[ 71], 99.90th=[ 79], 99.95th=[ 88], 00:14:08.301 | 99.99th=[ 141] 00:14:08.301 write: IOPS=7596, BW=29.7MiB/s (31.1MB/s)(29.7MiB/1001msec); 0 zone resets 00:14:08.301 slat (nsec): min=7487, max=42923, avg=8849.71, stdev=1006.76 00:14:08.301 clat (usec): min=37, max=101, avg=56.58, stdev= 3.89 00:14:08.301 lat (usec): min=55, max=144, avg=65.43, stdev= 4.02 00:14:08.301 clat percentiles (usec): 00:14:08.301 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:14:08.301 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:14:08.301 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 64], 00:14:08.301 | 99.00th=[ 67], 99.50th=[ 68], 99.90th=[ 73], 99.95th=[ 79], 00:14:08.301 | 99.99th=[ 101] 00:14:08.301 bw ( KiB/s): min=30568, max=30568, per=100.00%, avg=30568.00, stdev= 0.00, samples=1 00:14:08.301 iops : min= 7642, max= 7642, avg=7642.00, stdev= 0.00, samples=1 00:14:08.301 lat (usec) : 50=1.06%, 100=98.93%, 250=0.01% 00:14:08.301 cpu : usr=8.30%, sys=14.40%, ctx=14772, majf=0, minf=2 00:14:08.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:08.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:08.301 issued rwts: total=7168,7604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:08.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:08.301 00:14:08.301 Run status group 0 (all jobs): 00:14:08.301 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:14:08.301 WRITE: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=29.7MiB (31.1MB), run=1001-1001msec 00:14:08.301 00:14:08.301 Disk stats (read/write): 00:14:08.301 nvme0n1: ios=6666/6656, merge=0/0, ticks=353/322, in_queue=675, util=90.68% 00:14:08.301 14:48:42 
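The fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to exactly the job file echoed into the log: a one-second, iodepth-1, 4 KiB sequential-write job with crc32c-intel verification against the single connected namespace. A hand-rolled equivalent is sketched below, assuming the namespace is still visible as /dev/nvme0n1; the scratch job-file path is illustrative.

#!/usr/bin/env bash
# Reproduce the job file printed by fio-wrapper above and run it directly.
# Assumes the nmic namespace shows up as /dev/nvme0n1.
job=/tmp/nvmf-write-verify.fio
cat > "$job" <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

fio "$job"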
nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.201 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.201 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:10.201 rmmod nvme_rdma 00:14:10.460 rmmod nvme_fabrics 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2813960 ']' 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2813960 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2813960 ']' 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2813960 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2813960 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2813960' 00:14:10.460 killing process with pid 2813960 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2813960 00:14:10.460 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2813960 00:14:10.719 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.719 14:48:44 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:10.719 00:14:10.719 real 0m14.254s 00:14:10.719 user 0m41.775s 00:14:10.719 sys 0m4.781s 00:14:10.719 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- 
# xtrace_disable 00:14:10.719 14:48:44 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:10.719 ************************************ 00:14:10.719 END TEST nvmf_nmic 00:14:10.719 ************************************ 00:14:10.719 14:48:44 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:10.719 14:48:44 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:10.719 14:48:44 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:10.719 14:48:44 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:10.719 14:48:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:10.719 ************************************ 00:14:10.719 START TEST nvmf_fio_target 00:14:10.719 ************************************ 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:10.719 * Looking for test storage... 00:14:10.719 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:10.719 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.720 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.979 14:48:44 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.247 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.248 14:48:49 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:16.248 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:16.248 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:16.248 14:48:49 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:16.248 Found net devices under 0000:da:00.0: mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:16.248 Found net devices under 0000:da:00.1: mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:16.248 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:16.248 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:16.248 altname enp218s0f0np0 00:14:16.248 altname ens818f0np0 00:14:16.248 inet 192.168.100.8/24 scope global mlx_0_0 00:14:16.248 valid_lft forever preferred_lft forever 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:16.248 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:16.248 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:16.248 altname enp218s0f1np1 00:14:16.248 altname ens818f1np1 00:14:16.248 inet 
192.168.100.9/24 scope global mlx_0_1 00:14:16.248 valid_lft forever preferred_lft forever 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:16.248 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:16.249 192.168.100.9' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:16.249 192.168.100.9' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:16.249 192.168.100.9' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2818558 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2818558 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2818558 ']' 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.249 14:48:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.249 [2024-07-15 14:48:49.613072] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:14:16.249 [2024-07-15 14:48:49.613120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.249 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.249 [2024-07-15 14:48:49.668553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.249 [2024-07-15 14:48:49.742322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.249 [2024-07-15 14:48:49.742361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.249 [2024-07-15 14:48:49.742368] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.249 [2024-07-15 14:48:49.742374] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.249 [2024-07-15 14:48:49.742379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.249 [2024-07-15 14:48:49.742441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.249 [2024-07-15 14:48:49.742535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.249 [2024-07-15 14:48:49.742642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.249 [2024-07-15 14:48:49.742645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.505 14:48:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.505 14:48:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:16.505 14:48:50 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.505 14:48:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.505 14:48:50 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.762 14:48:50 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.762 14:48:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:16.762 [2024-07-15 14:48:50.622096] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b97cc0/0x1b9c1b0) succeed. 00:14:16.762 [2024-07-15 14:48:50.631307] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b99300/0x1bdd840) succeed. 
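Stripped of the xtrace noise, the target bring-up traced above is three steps: launch nvmf_tgt with the test's core mask, wait for its JSON-RPC socket, and create the RDMA transport. A condensed sketch follows, using the binary and rpc.py paths from this workspace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not its actual implementation.

#!/usr/bin/env bash
set -e
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# Same invocation as nvmfappstart -m 0xF in the trace.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Simplified waitforlisten: poll until the default RPC socket exists.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done

# Create the RDMA transport exactly as target/fio.sh@19 does; this is what
# triggers the create_ib_device notices for mlx5_0/mlx5_1 seen above.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

echo "nvmf_tgt running as pid $nvmfpid"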
00:14:17.020 14:48:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.278 14:48:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:17.278 14:48:50 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.278 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:17.278 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.535 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:17.535 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:17.794 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:17.794 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:18.052 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.052 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:18.052 14:48:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.310 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:18.310 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:18.568 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:18.568 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:18.825 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:18.825 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:18.825 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.083 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:19.083 14:48:52 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:19.341 14:48:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:19.341 [2024-07-15 14:48:53.194184] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:19.341 14:48:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:14:19.599 14:48:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:19.856 14:48:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:20.801 14:48:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:20.801 14:48:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.801 14:48:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.801 14:48:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:14:20.801 14:48:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:14:20.801 14:48:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:14:22.702 14:48:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:22.702 14:48:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:22.702 14:48:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.702 14:48:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:14:22.703 14:48:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.703 14:48:56 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:14:22.703 14:48:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:22.703 [global] 00:14:22.703 thread=1 00:14:22.703 invalidate=1 00:14:22.703 rw=write 00:14:22.703 time_based=1 00:14:22.703 runtime=1 00:14:22.703 ioengine=libaio 00:14:22.703 direct=1 00:14:22.703 bs=4096 00:14:22.703 iodepth=1 00:14:22.703 norandommap=0 00:14:22.703 numjobs=1 00:14:22.703 00:14:22.703 verify_dump=1 00:14:22.703 verify_backlog=512 00:14:22.703 verify_state_save=0 00:14:22.703 do_verify=1 00:14:22.703 verify=crc32c-intel 00:14:22.703 [job0] 00:14:22.703 filename=/dev/nvme0n1 00:14:22.703 [job1] 00:14:22.703 filename=/dev/nvme0n2 00:14:22.703 [job2] 00:14:22.703 filename=/dev/nvme0n3 00:14:22.703 [job3] 00:14:22.703 filename=/dev/nvme0n4 00:14:22.960 Could not set queue depth (nvme0n1) 00:14:22.960 Could not set queue depth (nvme0n2) 00:14:22.960 Could not set queue depth (nvme0n3) 00:14:22.960 Could not set queue depth (nvme0n4) 00:14:23.235 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:23.235 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:23.235 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:23.235 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:23.235 fio-3.35 00:14:23.235 Starting 4 threads 00:14:24.259 00:14:24.259 job0: (groupid=0, jobs=1): err= 0: pid=2819989: Mon Jul 15 14:48:58 2024 00:14:24.259 read: IOPS=4215, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1001msec) 00:14:24.259 slat 
(nsec): min=5982, max=29261, avg=6956.91, stdev=1075.02 00:14:24.259 clat (usec): min=66, max=263, avg=105.40, stdev=26.89 00:14:24.259 lat (usec): min=73, max=269, avg=112.36, stdev=27.38 00:14:24.259 clat percentiles (usec): 00:14:24.259 | 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 80], 00:14:24.259 | 30.00th=[ 83], 40.00th=[ 87], 50.00th=[ 95], 60.00th=[ 119], 00:14:24.259 | 70.00th=[ 124], 80.00th=[ 131], 90.00th=[ 141], 95.00th=[ 151], 00:14:24.259 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 194], 99.95th=[ 200], 00:14:24.259 | 99.99th=[ 265] 00:14:24.259 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:14:24.259 slat (nsec): min=8143, max=67573, avg=9340.91, stdev=1563.01 00:14:24.259 clat (usec): min=58, max=200, avg=100.06, stdev=27.73 00:14:24.259 lat (usec): min=73, max=211, avg=109.40, stdev=27.92 00:14:24.259 clat percentiles (usec): 00:14:24.259 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 76], 00:14:24.259 | 30.00th=[ 78], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 113], 00:14:24.259 | 70.00th=[ 121], 80.00th=[ 127], 90.00th=[ 139], 95.00th=[ 147], 00:14:24.259 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 194], 00:14:24.259 | 99.99th=[ 200] 00:14:24.259 bw ( KiB/s): min=21312, max=21312, per=28.15%, avg=21312.00, stdev= 0.00, samples=1 00:14:24.259 iops : min= 5328, max= 5328, avg=5328.00, stdev= 0.00, samples=1 00:14:24.259 lat (usec) : 100=54.27%, 250=45.72%, 500=0.01% 00:14:24.259 cpu : usr=5.50%, sys=8.00%, ctx=8829, majf=0, minf=2 00:14:24.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:24.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.259 issued rwts: total=4220,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:24.259 job1: (groupid=0, jobs=1): err= 0: pid=2820004: Mon Jul 15 14:48:58 2024 00:14:24.259 read: IOPS=4929, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1001msec) 00:14:24.259 slat (nsec): min=6361, max=25105, avg=7008.12, stdev=698.66 00:14:24.259 clat (usec): min=64, max=209, avg=92.76, stdev=23.89 00:14:24.259 lat (usec): min=71, max=216, avg=99.77, stdev=23.97 00:14:24.259 clat percentiles (usec): 00:14:24.259 | 1.00th=[ 69], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 77], 00:14:24.259 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 85], 00:14:24.259 | 70.00th=[ 92], 80.00th=[ 119], 90.00th=[ 129], 95.00th=[ 145], 00:14:24.259 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 204], 00:14:24.259 | 99.99th=[ 210] 00:14:24.259 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:14:24.259 slat (nsec): min=7930, max=35018, avg=8777.18, stdev=848.24 00:14:24.259 clat (usec): min=61, max=353, avg=86.64, stdev=23.61 00:14:24.259 lat (usec): min=69, max=362, avg=95.42, stdev=23.75 00:14:24.259 clat percentiles (usec): 00:14:24.259 | 1.00th=[ 65], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:14:24.259 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 78], 60.00th=[ 80], 00:14:24.260 | 70.00th=[ 83], 80.00th=[ 101], 90.00th=[ 129], 95.00th=[ 143], 00:14:24.260 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 190], 00:14:24.260 | 99.99th=[ 355] 00:14:24.260 bw ( KiB/s): min=24576, max=24576, per=32.46%, avg=24576.00, stdev= 0.00, samples=1 00:14:24.260 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:14:24.260 lat (usec) : 100=76.34%, 
250=23.65%, 500=0.01% 00:14:24.260 cpu : usr=5.60%, sys=10.60%, ctx=10054, majf=0, minf=1 00:14:24.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.260 issued rwts: total=4934,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:24.260 job2: (groupid=0, jobs=1): err= 0: pid=2820021: Mon Jul 15 14:48:58 2024 00:14:24.260 read: IOPS=4597, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1001msec) 00:14:24.260 slat (nsec): min=6516, max=29375, avg=7328.97, stdev=850.74 00:14:24.260 clat (usec): min=71, max=199, avg=102.16, stdev=21.14 00:14:24.260 lat (usec): min=79, max=207, avg=109.48, stdev=21.25 00:14:24.260 clat percentiles (usec): 00:14:24.260 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:14:24.260 | 30.00th=[ 88], 40.00th=[ 91], 50.00th=[ 94], 60.00th=[ 98], 00:14:24.260 | 70.00th=[ 112], 80.00th=[ 121], 90.00th=[ 137], 95.00th=[ 147], 00:14:24.260 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 190], 99.95th=[ 196], 00:14:24.260 | 99.99th=[ 200] 00:14:24.260 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:14:24.260 slat (nsec): min=8220, max=41460, avg=9182.47, stdev=1004.09 00:14:24.260 clat (usec): min=68, max=351, avg=94.59, stdev=19.69 00:14:24.260 lat (usec): min=77, max=360, avg=103.78, stdev=19.83 00:14:24.260 clat percentiles (usec): 00:14:24.260 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:14:24.260 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 90], 00:14:24.260 | 70.00th=[ 95], 80.00th=[ 110], 90.00th=[ 125], 95.00th=[ 141], 00:14:24.260 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 182], 99.95th=[ 190], 00:14:24.260 | 99.99th=[ 351] 00:14:24.260 bw ( KiB/s): min=20480, max=20480, per=27.05%, avg=20480.00, stdev= 0.00, samples=1 00:14:24.260 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:24.260 lat (usec) : 100=69.40%, 250=30.59%, 500=0.01% 00:14:24.260 cpu : usr=5.60%, sys=9.70%, ctx=9210, majf=0, minf=1 00:14:24.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.260 issued rwts: total=4602,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:24.260 job3: (groupid=0, jobs=1): err= 0: pid=2820026: Mon Jul 15 14:48:58 2024 00:14:24.260 read: IOPS=4240, BW=16.6MiB/s (17.4MB/s)(16.6MiB/1001msec) 00:14:24.260 slat (nsec): min=6181, max=28780, avg=7360.03, stdev=850.60 00:14:24.260 clat (usec): min=69, max=173, avg=104.80, stdev=17.79 00:14:24.260 lat (usec): min=77, max=181, avg=112.16, stdev=17.80 00:14:24.260 clat percentiles (usec): 00:14:24.260 | 1.00th=[ 77], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 89], 00:14:24.260 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 110], 00:14:24.260 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 135], 00:14:24.260 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 172], 00:14:24.260 | 99.99th=[ 174] 00:14:24.260 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:14:24.260 slat (nsec): min=8273, max=35609, avg=9207.57, stdev=969.00 00:14:24.260 clat (usec): min=66, max=243, 
avg=100.65, stdev=19.74 00:14:24.260 lat (usec): min=75, max=252, avg=109.86, stdev=19.87 00:14:24.260 clat percentiles (usec): 00:14:24.260 | 1.00th=[ 74], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 84], 00:14:24.260 | 30.00th=[ 87], 40.00th=[ 90], 50.00th=[ 94], 60.00th=[ 101], 00:14:24.260 | 70.00th=[ 115], 80.00th=[ 121], 90.00th=[ 128], 95.00th=[ 135], 00:14:24.260 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 178], 99.95th=[ 184], 00:14:24.260 | 99.99th=[ 243] 00:14:24.260 bw ( KiB/s): min=20480, max=20480, per=27.05%, avg=20480.00, stdev= 0.00, samples=1 00:14:24.260 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:24.260 lat (usec) : 100=54.44%, 250=45.56% 00:14:24.260 cpu : usr=4.60%, sys=10.00%, ctx=8853, majf=0, minf=1 00:14:24.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:24.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:24.260 issued rwts: total=4245,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:24.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:24.260 00:14:24.260 Run status group 0 (all jobs): 00:14:24.260 READ: bw=70.2MiB/s (73.7MB/s), 16.5MiB/s-19.3MiB/s (17.3MB/s-20.2MB/s), io=70.3MiB (73.7MB), run=1001-1001msec 00:14:24.260 WRITE: bw=73.9MiB/s (77.5MB/s), 18.0MiB/s-20.0MiB/s (18.9MB/s-20.9MB/s), io=74.0MiB (77.6MB), run=1001-1001msec 00:14:24.260 00:14:24.260 Disk stats (read/write): 00:14:24.260 nvme0n1: ios=3860/4096, merge=0/0, ticks=382/368, in_queue=750, util=86.67% 00:14:24.260 nvme0n2: ios=4505/4608, merge=0/0, ticks=374/327, in_queue=701, util=87.02% 00:14:24.260 nvme0n3: ios=4096/4179, merge=0/0, ticks=369/340, in_queue=709, util=89.10% 00:14:24.260 nvme0n4: ios=3584/3720, merge=0/0, ticks=366/361, in_queue=727, util=89.75% 00:14:24.260 14:48:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:24.260 [global] 00:14:24.260 thread=1 00:14:24.260 invalidate=1 00:14:24.260 rw=randwrite 00:14:24.260 time_based=1 00:14:24.260 runtime=1 00:14:24.260 ioengine=libaio 00:14:24.260 direct=1 00:14:24.260 bs=4096 00:14:24.260 iodepth=1 00:14:24.260 norandommap=0 00:14:24.260 numjobs=1 00:14:24.260 00:14:24.260 verify_dump=1 00:14:24.260 verify_backlog=512 00:14:24.260 verify_state_save=0 00:14:24.260 do_verify=1 00:14:24.260 verify=crc32c-intel 00:14:24.260 [job0] 00:14:24.260 filename=/dev/nvme0n1 00:14:24.260 [job1] 00:14:24.260 filename=/dev/nvme0n2 00:14:24.260 [job2] 00:14:24.260 filename=/dev/nvme0n3 00:14:24.260 [job3] 00:14:24.260 filename=/dev/nvme0n4 00:14:24.519 Could not set queue depth (nvme0n1) 00:14:24.519 Could not set queue depth (nvme0n2) 00:14:24.519 Could not set queue depth (nvme0n3) 00:14:24.519 Could not set queue depth (nvme0n4) 00:14:24.519 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.519 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.519 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.519 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:24.519 fio-3.35 00:14:24.519 Starting 4 threads 00:14:25.893 00:14:25.893 job0: (groupid=0, jobs=1): err= 0: pid=2820427: Mon Jul 15 
14:48:59 2024 00:14:25.893 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:14:25.893 slat (nsec): min=6350, max=16019, avg=7149.74, stdev=638.75 00:14:25.893 clat (usec): min=66, max=343, avg=126.76, stdev=22.49 00:14:25.893 lat (usec): min=73, max=351, avg=133.91, stdev=22.52 00:14:25.893 clat percentiles (usec): 00:14:25.893 | 1.00th=[ 75], 5.00th=[ 87], 10.00th=[ 93], 20.00th=[ 114], 00:14:25.893 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 131], 00:14:25.893 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 157], 95.00th=[ 165], 00:14:25.893 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 306], 00:14:25.893 | 99.99th=[ 343] 00:14:25.893 write: IOPS=3964, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets 00:14:25.893 slat (nsec): min=7863, max=66343, avg=8788.04, stdev=1178.64 00:14:25.893 clat (usec): min=61, max=188, avg=118.21, stdev=21.53 00:14:25.893 lat (usec): min=70, max=209, avg=127.00, stdev=21.60 00:14:25.893 clat percentiles (usec): 00:14:25.893 | 1.00th=[ 71], 5.00th=[ 80], 10.00th=[ 86], 20.00th=[ 105], 00:14:25.893 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:14:25.893 | 70.00th=[ 126], 80.00th=[ 133], 90.00th=[ 149], 95.00th=[ 157], 00:14:25.893 | 99.00th=[ 169], 99.50th=[ 172], 99.90th=[ 182], 99.95th=[ 188], 00:14:25.893 | 99.99th=[ 190] 00:14:25.893 bw ( KiB/s): min=16384, max=16384, per=22.56%, avg=16384.00, stdev= 0.00, samples=1 00:14:25.893 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:25.894 lat (usec) : 100=15.28%, 250=84.69%, 500=0.03% 00:14:25.894 cpu : usr=5.20%, sys=7.40%, ctx=7553, majf=0, minf=1 00:14:25.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 issued rwts: total=3584,3968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.894 job1: (groupid=0, jobs=1): err= 0: pid=2820446: Mon Jul 15 14:48:59 2024 00:14:25.894 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:14:25.894 slat (nsec): min=6319, max=15707, avg=7064.93, stdev=757.73 00:14:25.894 clat (usec): min=66, max=354, avg=126.96, stdev=23.18 00:14:25.894 lat (usec): min=72, max=361, avg=134.03, stdev=23.19 00:14:25.894 clat percentiles (usec): 00:14:25.894 | 1.00th=[ 75], 5.00th=[ 86], 10.00th=[ 92], 20.00th=[ 114], 00:14:25.894 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 128], 60.00th=[ 133], 00:14:25.894 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 159], 95.00th=[ 167], 00:14:25.894 | 99.00th=[ 182], 99.50th=[ 184], 99.90th=[ 196], 99.95th=[ 302], 00:14:25.894 | 99.99th=[ 355] 00:14:25.894 write: IOPS=3965, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets 00:14:25.894 slat (nsec): min=7814, max=41367, avg=8693.87, stdev=974.09 00:14:25.894 clat (usec): min=62, max=192, avg=118.28, stdev=21.80 00:14:25.894 lat (usec): min=70, max=201, avg=126.97, stdev=21.85 00:14:25.894 clat percentiles (usec): 00:14:25.894 | 1.00th=[ 71], 5.00th=[ 80], 10.00th=[ 85], 20.00th=[ 105], 00:14:25.894 | 30.00th=[ 112], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 122], 00:14:25.894 | 70.00th=[ 126], 80.00th=[ 133], 90.00th=[ 151], 95.00th=[ 157], 00:14:25.894 | 99.00th=[ 167], 99.50th=[ 172], 99.90th=[ 180], 99.95th=[ 182], 00:14:25.894 | 99.99th=[ 194] 00:14:25.894 bw ( KiB/s): min=16384, max=16384, per=22.56%, avg=16384.00, stdev= 0.00, 
samples=1 00:14:25.894 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:25.894 lat (usec) : 100=15.60%, 250=84.38%, 500=0.03% 00:14:25.894 cpu : usr=2.40%, sys=10.10%, ctx=7553, majf=0, minf=1 00:14:25.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 issued rwts: total=3584,3969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.894 job2: (groupid=0, jobs=1): err= 0: pid=2820464: Mon Jul 15 14:48:59 2024 00:14:25.894 read: IOPS=4789, BW=18.7MiB/s (19.6MB/s)(18.7MiB/1001msec) 00:14:25.894 slat (nsec): min=6756, max=32365, avg=7426.71, stdev=910.64 00:14:25.894 clat (usec): min=74, max=131, avg=91.29, stdev= 6.24 00:14:25.894 lat (usec): min=82, max=138, avg=98.72, stdev= 6.29 00:14:25.894 clat percentiles (usec): 00:14:25.894 | 1.00th=[ 81], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:14:25.894 | 30.00th=[ 88], 40.00th=[ 90], 50.00th=[ 91], 60.00th=[ 92], 00:14:25.894 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 99], 95.00th=[ 103], 00:14:25.894 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 124], 99.95th=[ 129], 00:14:25.894 | 99.99th=[ 133] 00:14:25.894 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:14:25.894 slat (nsec): min=8920, max=74043, avg=9687.79, stdev=1304.11 00:14:25.894 clat (usec): min=73, max=125, avg=87.64, stdev= 6.52 00:14:25.894 lat (usec): min=82, max=154, avg=97.33, stdev= 6.64 00:14:25.894 clat percentiles (usec): 00:14:25.894 | 1.00th=[ 77], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:14:25.894 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 89], 00:14:25.894 | 70.00th=[ 90], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 100], 00:14:25.894 | 99.00th=[ 110], 99.50th=[ 114], 99.90th=[ 119], 99.95th=[ 123], 00:14:25.894 | 99.99th=[ 126] 00:14:25.894 bw ( KiB/s): min=20480, max=20480, per=28.20%, avg=20480.00, stdev= 0.00, samples=1 00:14:25.894 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:25.894 lat (usec) : 100=93.69%, 250=6.31% 00:14:25.894 cpu : usr=7.10%, sys=13.20%, ctx=9914, majf=0, minf=1 00:14:25.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 issued rwts: total=4794,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.894 job3: (groupid=0, jobs=1): err= 0: pid=2820469: Mon Jul 15 14:48:59 2024 00:14:25.894 read: IOPS=4979, BW=19.4MiB/s (20.4MB/s)(19.5MiB/1001msec) 00:14:25.894 slat (nsec): min=7097, max=30943, avg=8753.79, stdev=1226.08 00:14:25.894 clat (usec): min=71, max=125, avg=88.87, stdev= 6.03 00:14:25.894 lat (usec): min=80, max=134, avg=97.62, stdev= 6.12 00:14:25.894 clat percentiles (usec): 00:14:25.894 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 84], 00:14:25.894 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90], 00:14:25.894 | 70.00th=[ 91], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 100], 00:14:25.894 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 123], 99.95th=[ 125], 00:14:25.894 | 99.99th=[ 126] 00:14:25.894 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:14:25.894 slat 
(nsec): min=8451, max=41900, avg=10533.76, stdev=1507.31 00:14:25.894 clat (usec): min=68, max=118, avg=84.91, stdev= 6.36 00:14:25.894 lat (usec): min=77, max=145, avg=95.45, stdev= 6.49 00:14:25.894 clat percentiles (usec): 00:14:25.894 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 80], 00:14:25.894 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:14:25.894 | 70.00th=[ 88], 80.00th=[ 90], 90.00th=[ 93], 95.00th=[ 97], 00:14:25.894 | 99.00th=[ 105], 99.50th=[ 110], 99.90th=[ 116], 99.95th=[ 118], 00:14:25.894 | 99.99th=[ 119] 00:14:25.894 bw ( KiB/s): min=20480, max=20480, per=28.20%, avg=20480.00, stdev= 0.00, samples=1 00:14:25.894 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:25.894 lat (usec) : 100=96.42%, 250=3.58% 00:14:25.894 cpu : usr=7.40%, sys=12.50%, ctx=10104, majf=0, minf=2 00:14:25.894 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:25.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.894 issued rwts: total=4984,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.894 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:25.894 00:14:25.894 Run status group 0 (all jobs): 00:14:25.894 READ: bw=66.1MiB/s (69.3MB/s), 14.0MiB/s-19.4MiB/s (14.7MB/s-20.4MB/s), io=66.2MiB (69.4MB), run=1001-1001msec 00:14:25.894 WRITE: bw=70.9MiB/s (74.4MB/s), 15.5MiB/s-20.0MiB/s (16.2MB/s-20.9MB/s), io=71.0MiB (74.5MB), run=1001-1001msec 00:14:25.894 00:14:25.894 Disk stats (read/write): 00:14:25.895 nvme0n1: ios=3121/3361, merge=0/0, ticks=393/377, in_queue=770, util=86.27% 00:14:25.895 nvme0n2: ios=3072/3361, merge=0/0, ticks=374/370, in_queue=744, util=86.95% 00:14:25.895 nvme0n3: ios=4096/4350, merge=0/0, ticks=352/343, in_queue=695, util=89.04% 00:14:25.895 nvme0n4: ios=4096/4515, merge=0/0, ticks=343/348, in_queue=691, util=89.70% 00:14:25.895 14:48:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:25.895 [global] 00:14:25.895 thread=1 00:14:25.895 invalidate=1 00:14:25.895 rw=write 00:14:25.895 time_based=1 00:14:25.895 runtime=1 00:14:25.895 ioengine=libaio 00:14:25.895 direct=1 00:14:25.895 bs=4096 00:14:25.895 iodepth=128 00:14:25.895 norandommap=0 00:14:25.895 numjobs=1 00:14:25.895 00:14:25.895 verify_dump=1 00:14:25.895 verify_backlog=512 00:14:25.895 verify_state_save=0 00:14:25.895 do_verify=1 00:14:25.895 verify=crc32c-intel 00:14:25.895 [job0] 00:14:25.895 filename=/dev/nvme0n1 00:14:25.895 [job1] 00:14:25.895 filename=/dev/nvme0n2 00:14:25.895 [job2] 00:14:25.895 filename=/dev/nvme0n3 00:14:25.895 [job3] 00:14:25.895 filename=/dev/nvme0n4 00:14:25.895 Could not set queue depth (nvme0n1) 00:14:25.895 Could not set queue depth (nvme0n2) 00:14:25.895 Could not set queue depth (nvme0n3) 00:14:25.895 Could not set queue depth (nvme0n4) 00:14:26.153 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:26.153 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:26.153 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:26.153 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:26.153 fio-3.35 00:14:26.153 Starting 4 threads 
00:14:27.529 00:14:27.529 job0: (groupid=0, jobs=1): err= 0: pid=2820897: Mon Jul 15 14:49:01 2024 00:14:27.529 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:14:27.529 slat (nsec): min=1564, max=1077.2k, avg=105243.87, stdev=253834.59 00:14:27.529 clat (usec): min=10653, max=19570, avg=13635.14, stdev=2885.72 00:14:27.529 lat (usec): min=11072, max=19947, avg=13740.38, stdev=2897.13 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[11076], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:14:27.529 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:14:27.529 | 70.00th=[12780], 80.00th=[18220], 90.00th=[19006], 95.00th=[19268], 00:14:27.529 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:14:27.529 | 99.99th=[19530] 00:14:27.529 write: IOPS=5045, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec); 0 zone resets 00:14:27.529 slat (usec): min=2, max=1603, avg=98.91, stdev=239.93 00:14:27.529 clat (usec): min=2739, max=20752, avg=12645.23, stdev=2634.24 00:14:27.529 lat (usec): min=3504, max=20755, avg=12744.14, stdev=2641.43 00:14:27.529 clat percentiles (usec): 00:14:27.529 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[10945], 20.00th=[11076], 00:14:27.529 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:14:27.529 | 70.00th=[12125], 80.00th=[13173], 90.00th=[17957], 95.00th=[18220], 00:14:27.529 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20055], 99.95th=[20841], 00:14:27.529 | 99.99th=[20841] 00:14:27.529 bw ( KiB/s): min=17440, max=21988, per=21.89%, avg=19714.00, stdev=3215.92, samples=2 00:14:27.529 iops : min= 4360, max= 5497, avg=4928.50, stdev=803.98, samples=2 00:14:27.529 lat (msec) : 4=0.03%, 10=0.51%, 20=99.37%, 50=0.09% 00:14:27.529 cpu : usr=1.60%, sys=4.39%, ctx=1596, majf=0, minf=1 00:14:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.530 issued rwts: total=4608,5061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.530 job1: (groupid=0, jobs=1): err= 0: pid=2820898: Mon Jul 15 14:49:01 2024 00:14:27.530 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:14:27.530 slat (nsec): min=1443, max=1074.8k, avg=68116.65, stdev=199945.67 00:14:27.530 clat (usec): min=4592, max=13779, avg=8808.12, stdev=3283.31 00:14:27.530 lat (usec): min=4681, max=13796, avg=8876.23, stdev=3304.80 00:14:27.530 clat percentiles (usec): 00:14:27.530 | 1.00th=[ 5014], 5.00th=[ 5211], 10.00th=[ 5276], 20.00th=[ 5407], 00:14:27.530 | 30.00th=[ 5604], 40.00th=[ 5866], 50.00th=[ 9634], 60.00th=[11731], 00:14:27.530 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12649], 00:14:27.530 | 99.00th=[13435], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:14:27.530 | 99.99th=[13829] 00:14:27.530 write: IOPS=7335, BW=28.7MiB/s (30.0MB/s)(28.7MiB/1001msec); 0 zone resets 00:14:27.530 slat (nsec): min=1963, max=1471.0k, avg=66967.08, stdev=193272.37 00:14:27.530 clat (usec): min=336, max=13054, avg=8640.20, stdev=3164.31 00:14:27.530 lat (usec): min=918, max=13061, avg=8707.17, stdev=3184.25 00:14:27.530 clat percentiles (usec): 00:14:27.530 | 1.00th=[ 3589], 5.00th=[ 4883], 10.00th=[ 4948], 20.00th=[ 5145], 00:14:27.530 | 30.00th=[ 5342], 40.00th=[ 5604], 50.00th=[10945], 60.00th=[11076], 00:14:27.530 | 70.00th=[11338], 
80.00th=[11469], 90.00th=[11994], 95.00th=[12256], 00:14:27.530 | 99.00th=[12911], 99.50th=[13042], 99.90th=[13042], 99.95th=[13042], 00:14:27.530 | 99.99th=[13042] 00:14:27.530 bw ( KiB/s): min=22128, max=22128, per=24.57%, avg=22128.00, stdev= 0.00, samples=1 00:14:27.530 iops : min= 5532, max= 5532, avg=5532.00, stdev= 0.00, samples=1 00:14:27.530 lat (usec) : 500=0.01%, 1000=0.10% 00:14:27.530 lat (msec) : 2=0.10%, 4=0.32%, 10=46.70%, 20=52.79% 00:14:27.530 cpu : usr=2.60%, sys=5.50%, ctx=1689, majf=0, minf=1 00:14:27.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:27.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.530 issued rwts: total=7168,7343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.530 job2: (groupid=0, jobs=1): err= 0: pid=2820899: Mon Jul 15 14:49:01 2024 00:14:27.530 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:14:27.530 slat (nsec): min=1529, max=1644.9k, avg=105633.84, stdev=269079.35 00:14:27.530 clat (usec): min=10585, max=19606, avg=13638.71, stdev=2887.04 00:14:27.530 lat (usec): min=10915, max=19614, avg=13744.34, stdev=2897.39 00:14:27.530 clat percentiles (usec): 00:14:27.530 | 1.00th=[10945], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:14:27.530 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12518], 00:14:27.530 | 70.00th=[12780], 80.00th=[18220], 90.00th=[19006], 95.00th=[19268], 00:14:27.530 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:14:27.530 | 99.99th=[19530] 00:14:27.530 write: IOPS=5040, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1003msec); 0 zone resets 00:14:27.530 slat (usec): min=2, max=1602, avg=98.68, stdev=252.40 00:14:27.530 clat (usec): min=2738, max=20746, avg=12652.87, stdev=2633.33 00:14:27.530 lat (usec): min=4324, max=20751, avg=12751.55, stdev=2639.30 00:14:27.530 clat percentiles (usec): 00:14:27.530 | 1.00th=[10028], 5.00th=[10683], 10.00th=[10945], 20.00th=[11076], 00:14:27.530 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:14:27.530 | 70.00th=[12125], 80.00th=[13173], 90.00th=[17957], 95.00th=[18220], 00:14:27.530 | 99.00th=[18744], 99.50th=[19006], 99.90th=[20841], 99.95th=[20841], 00:14:27.530 | 99.99th=[20841] 00:14:27.530 bw ( KiB/s): min=17416, max=22016, per=21.89%, avg=19716.00, stdev=3252.69, samples=2 00:14:27.530 iops : min= 4354, max= 5504, avg=4929.00, stdev=813.17, samples=2 00:14:27.530 lat (msec) : 4=0.01%, 10=0.51%, 20=99.37%, 50=0.11% 00:14:27.530 cpu : usr=2.89%, sys=3.09%, ctx=1638, majf=0, minf=1 00:14:27.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:14:27.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.530 issued rwts: total=4608,5056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.530 job3: (groupid=0, jobs=1): err= 0: pid=2820900: Mon Jul 15 14:49:01 2024 00:14:27.530 read: IOPS=4597, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:14:27.530 slat (nsec): min=1271, max=2919.9k, avg=103918.14, stdev=296385.82 00:14:27.530 clat (usec): min=2728, max=19619, avg=13533.66, stdev=2884.10 00:14:27.530 lat (usec): min=3457, max=19626, avg=13637.58, stdev=2889.84 00:14:27.530 clat percentiles 
(usec): 00:14:27.530 | 1.00th=[10945], 5.00th=[11338], 10.00th=[11600], 20.00th=[11731], 00:14:27.530 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:14:27.530 | 70.00th=[12649], 80.00th=[17957], 90.00th=[19006], 95.00th=[19268], 00:14:27.530 | 99.00th=[19530], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:14:27.530 | 99.99th=[19530] 00:14:27.530 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:14:27.530 slat (usec): min=2, max=2376, avg=98.95, stdev=279.44 00:14:27.530 clat (usec): min=3464, max=20024, avg=12586.27, stdev=2691.82 00:14:27.530 lat (usec): min=3467, max=20750, avg=12685.22, stdev=2697.89 00:14:27.530 clat percentiles (usec): 00:14:27.530 | 1.00th=[ 9372], 5.00th=[10552], 10.00th=[10945], 20.00th=[11076], 00:14:27.530 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:14:27.530 | 70.00th=[11994], 80.00th=[12911], 90.00th=[18220], 95.00th=[18220], 00:14:27.530 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[20055], 00:14:27.530 | 99.99th=[20055] 00:14:27.530 bw ( KiB/s): min=17792, max=22176, per=22.19%, avg=19984.00, stdev=3099.96, samples=2 00:14:27.530 iops : min= 4448, max= 5544, avg=4996.00, stdev=774.99, samples=2 00:14:27.530 lat (msec) : 4=0.16%, 10=0.64%, 20=99.15%, 50=0.05% 00:14:27.530 cpu : usr=2.20%, sys=3.49%, ctx=1638, majf=0, minf=1 00:14:27.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:27.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:27.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:27.530 issued rwts: total=4611,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:27.530 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:27.530 00:14:27.530 Run status group 0 (all jobs): 00:14:27.530 READ: bw=81.8MiB/s (85.7MB/s), 17.9MiB/s-28.0MiB/s (18.8MB/s-29.3MB/s), io=82.0MiB (86.0MB), run=1001-1003msec 00:14:27.530 WRITE: bw=87.9MiB/s (92.2MB/s), 19.7MiB/s-28.7MiB/s (20.6MB/s-30.0MB/s), io=88.2MiB (92.5MB), run=1001-1003msec 00:14:27.530 00:14:27.530 Disk stats (read/write): 00:14:27.530 nvme0n1: ios=4145/4361, merge=0/0, ticks=13267/12908, in_queue=26175, util=84.07% 00:14:27.530 nvme0n2: ios=5120/5294, merge=0/0, ticks=13023/13134, in_queue=26157, util=84.68% 00:14:27.530 nvme0n3: ios=4096/4359, merge=0/0, ticks=13270/12911, in_queue=26181, util=88.19% 00:14:27.530 nvme0n4: ios=4096/4420, merge=0/0, ticks=13109/13141, in_queue=26250, util=89.33% 00:14:27.530 14:49:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:27.530 [global] 00:14:27.530 thread=1 00:14:27.530 invalidate=1 00:14:27.530 rw=randwrite 00:14:27.530 time_based=1 00:14:27.530 runtime=1 00:14:27.530 ioengine=libaio 00:14:27.530 direct=1 00:14:27.530 bs=4096 00:14:27.530 iodepth=128 00:14:27.530 norandommap=0 00:14:27.530 numjobs=1 00:14:27.530 00:14:27.530 verify_dump=1 00:14:27.530 verify_backlog=512 00:14:27.530 verify_state_save=0 00:14:27.530 do_verify=1 00:14:27.530 verify=crc32c-intel 00:14:27.530 [job0] 00:14:27.530 filename=/dev/nvme0n1 00:14:27.530 [job1] 00:14:27.530 filename=/dev/nvme0n2 00:14:27.530 [job2] 00:14:27.530 filename=/dev/nvme0n3 00:14:27.530 [job3] 00:14:27.530 filename=/dev/nvme0n4 00:14:27.530 Could not set queue depth (nvme0n1) 00:14:27.530 Could not set queue depth (nvme0n2) 00:14:27.530 Could not set queue depth (nvme0n3) 00:14:27.530 Could not 
set queue depth (nvme0n4) 00:14:27.789 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:27.789 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:27.789 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:27.789 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:27.789 fio-3.35 00:14:27.789 Starting 4 threads 00:14:29.183 00:14:29.184 job0: (groupid=0, jobs=1): err= 0: pid=2821270: Mon Jul 15 14:49:02 2024 00:14:29.184 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:14:29.184 slat (nsec): min=1350, max=1826.0k, avg=95053.89, stdev=287001.34 00:14:29.184 clat (usec): min=6172, max=18813, avg=12320.96, stdev=1984.78 00:14:29.184 lat (usec): min=6175, max=19165, avg=12416.02, stdev=1982.99 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[ 7046], 5.00th=[10290], 10.00th=[11338], 20.00th=[11731], 00:14:29.184 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:14:29.184 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12780], 95.00th=[17695], 00:14:29.184 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:14:29.184 | 99.99th=[18744] 00:14:29.184 write: IOPS=5351, BW=20.9MiB/s (21.9MB/s)(21.0MiB/1003msec); 0 zone resets 00:14:29.184 slat (nsec): min=1911, max=1779.3k, avg=93407.40, stdev=282157.04 00:14:29.184 clat (usec): min=2410, max=18163, avg=11875.62, stdev=2343.83 00:14:29.184 lat (usec): min=2953, max=18484, avg=11969.02, stdev=2344.21 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[ 4883], 5.00th=[ 6652], 10.00th=[ 9372], 20.00th=[11600], 00:14:29.184 | 30.00th=[11863], 40.00th=[11994], 50.00th=[11994], 60.00th=[12125], 00:14:29.184 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12780], 95.00th=[16712], 00:14:29.184 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17957], 00:14:29.184 | 99.99th=[18220] 00:14:29.184 bw ( KiB/s): min=20480, max=21448, per=24.21%, avg=20964.00, stdev=684.48, samples=2 00:14:29.184 iops : min= 5120, max= 5362, avg=5241.00, stdev=171.12, samples=2 00:14:29.184 lat (msec) : 4=0.25%, 10=6.93%, 20=92.82% 00:14:29.184 cpu : usr=1.70%, sys=3.09%, ctx=2496, majf=0, minf=1 00:14:29.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:29.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:29.184 issued rwts: total=5120,5368,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:29.184 job1: (groupid=0, jobs=1): err= 0: pid=2821271: Mon Jul 15 14:49:02 2024 00:14:29.184 read: IOPS=4949, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1004msec) 00:14:29.184 slat (nsec): min=1486, max=1505.1k, avg=99750.47, stdev=250667.94 00:14:29.184 clat (usec): min=2868, max=19438, avg=12810.64, stdev=1897.39 00:14:29.184 lat (usec): min=3595, max=19442, avg=12910.39, stdev=1905.29 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[ 8455], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:14:29.184 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:14:29.184 | 70.00th=[12649], 80.00th=[12780], 90.00th=[16909], 95.00th=[17695], 00:14:29.184 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 
00:14:29.184 | 99.99th=[19530] 00:14:29.184 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:14:29.184 slat (nsec): min=1939, max=2064.3k, avg=96299.20, stdev=244564.28 00:14:29.184 clat (usec): min=10460, max=18522, avg=12332.95, stdev=1692.03 00:14:29.184 lat (usec): min=10464, max=18525, avg=12429.25, stdev=1700.28 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[10683], 5.00th=[10945], 10.00th=[11076], 20.00th=[11469], 00:14:29.184 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:14:29.184 | 70.00th=[12125], 80.00th=[12387], 90.00th=[16057], 95.00th=[16712], 00:14:29.184 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:14:29.184 | 99.99th=[18482] 00:14:29.184 bw ( KiB/s): min=20480, max=20480, per=23.66%, avg=20480.00, stdev= 0.00, samples=2 00:14:29.184 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:14:29.184 lat (msec) : 4=0.10%, 10=0.54%, 20=99.37% 00:14:29.184 cpu : usr=1.99%, sys=2.79%, ctx=1836, majf=0, minf=1 00:14:29.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:29.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:29.184 issued rwts: total=4969,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:29.184 job2: (groupid=0, jobs=1): err= 0: pid=2821273: Mon Jul 15 14:49:02 2024 00:14:29.184 read: IOPS=5915, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1003msec) 00:14:29.184 slat (nsec): min=1488, max=1540.7k, avg=83638.44, stdev=265309.50 00:14:29.184 clat (usec): min=1600, max=14164, avg=10658.91, stdev=2342.59 00:14:29.184 lat (usec): min=2411, max=14167, avg=10742.55, stdev=2346.74 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[ 6063], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 7308], 00:14:29.184 | 30.00th=[11076], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:14:29.184 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12518], 00:14:29.184 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13304], 99.95th=[14222], 00:14:29.184 | 99.99th=[14222] 00:14:29.184 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:14:29.184 slat (nsec): min=1984, max=2542.4k, avg=80050.48, stdev=254397.37 00:14:29.184 clat (usec): min=5741, max=12931, avg=10373.44, stdev=2512.61 00:14:29.184 lat (usec): min=5748, max=13262, avg=10453.49, stdev=2520.78 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[ 5800], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6652], 00:14:29.184 | 30.00th=[10552], 40.00th=[11600], 50.00th=[11863], 60.00th=[11863], 00:14:29.184 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12387], 95.00th=[12387], 00:14:29.184 | 99.00th=[12780], 99.50th=[12780], 99.90th=[12911], 99.95th=[12911], 00:14:29.184 | 99.99th=[12911] 00:14:29.184 bw ( KiB/s): min=20480, max=28672, per=28.39%, avg=24576.00, stdev=5792.62, samples=2 00:14:29.184 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:14:29.184 lat (msec) : 2=0.01%, 4=0.22%, 10=28.00%, 20=71.76% 00:14:29.184 cpu : usr=1.40%, sys=3.99%, ctx=1968, majf=0, minf=1 00:14:29.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:29.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:29.184 issued rwts: 
total=5933,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:29.184 job3: (groupid=0, jobs=1): err= 0: pid=2821274: Mon Jul 15 14:49:02 2024 00:14:29.184 read: IOPS=4940, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1005msec) 00:14:29.184 slat (nsec): min=1531, max=1720.5k, avg=100505.63, stdev=252631.18 00:14:29.184 clat (usec): min=2842, max=18914, avg=12816.48, stdev=1886.01 00:14:29.184 lat (usec): min=3597, max=19140, avg=12916.99, stdev=1893.48 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[ 8455], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:14:29.184 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:14:29.184 | 70.00th=[12649], 80.00th=[12780], 90.00th=[16909], 95.00th=[17695], 00:14:29.184 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[19006], 00:14:29.184 | 99.99th=[19006] 00:14:29.184 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:14:29.184 slat (nsec): min=1961, max=2049.3k, avg=95674.67, stdev=241542.60 00:14:29.184 clat (usec): min=10455, max=18394, avg=12338.29, stdev=1692.02 00:14:29.184 lat (usec): min=10458, max=18398, avg=12433.96, stdev=1701.66 00:14:29.184 clat percentiles (usec): 00:14:29.184 | 1.00th=[10683], 5.00th=[10945], 10.00th=[11207], 20.00th=[11469], 00:14:29.184 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:14:29.184 | 70.00th=[12125], 80.00th=[12387], 90.00th=[16057], 95.00th=[16909], 00:14:29.184 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:14:29.184 | 99.99th=[18482] 00:14:29.184 bw ( KiB/s): min=20480, max=20480, per=23.66%, avg=20480.00, stdev= 0.00, samples=2 00:14:29.184 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:14:29.184 lat (msec) : 4=0.08%, 10=0.57%, 20=99.36% 00:14:29.184 cpu : usr=1.89%, sys=2.99%, ctx=1838, majf=0, minf=1 00:14:29.184 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:29.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:29.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:29.184 issued rwts: total=4965,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:29.184 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:29.184 00:14:29.184 Run status group 0 (all jobs): 00:14:29.184 READ: bw=81.6MiB/s (85.5MB/s), 19.3MiB/s-23.1MiB/s (20.2MB/s-24.2MB/s), io=82.0MiB (86.0MB), run=1003-1005msec 00:14:29.184 WRITE: bw=84.5MiB/s (88.7MB/s), 19.9MiB/s-23.9MiB/s (20.9MB/s-25.1MB/s), io=85.0MiB (89.1MB), run=1003-1005msec 00:14:29.184 00:14:29.184 Disk stats (read/write): 00:14:29.184 nvme0n1: ios=4438/4608, merge=0/0, ticks=14325/14602, in_queue=28927, util=87.17% 00:14:29.184 nvme0n2: ios=4096/4485, merge=0/0, ticks=17567/18335, in_queue=35902, util=87.34% 00:14:29.184 nvme0n3: ios=5120/5452, merge=0/0, ticks=13540/13761, in_queue=27301, util=89.14% 00:14:29.184 nvme0n4: ios=4096/4482, merge=0/0, ticks=17564/18332, in_queue=35896, util=89.70% 00:14:29.184 14:49:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:29.184 14:49:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2821500 00:14:29.185 14:49:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:29.185 14:49:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:29.185 [global] 00:14:29.185 thread=1 00:14:29.185 
invalidate=1 00:14:29.185 rw=read 00:14:29.185 time_based=1 00:14:29.185 runtime=10 00:14:29.185 ioengine=libaio 00:14:29.185 direct=1 00:14:29.185 bs=4096 00:14:29.185 iodepth=1 00:14:29.185 norandommap=1 00:14:29.185 numjobs=1 00:14:29.185 00:14:29.185 [job0] 00:14:29.185 filename=/dev/nvme0n1 00:14:29.185 [job1] 00:14:29.185 filename=/dev/nvme0n2 00:14:29.185 [job2] 00:14:29.185 filename=/dev/nvme0n3 00:14:29.185 [job3] 00:14:29.185 filename=/dev/nvme0n4 00:14:29.185 Could not set queue depth (nvme0n1) 00:14:29.185 Could not set queue depth (nvme0n2) 00:14:29.185 Could not set queue depth (nvme0n3) 00:14:29.185 Could not set queue depth (nvme0n4) 00:14:29.457 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.457 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.457 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.457 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:29.457 fio-3.35 00:14:29.457 Starting 4 threads 00:14:31.984 14:49:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:32.241 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=114696192, buflen=4096 00:14:32.241 fio: pid=2821649, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:32.241 14:49:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:32.241 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=83005440, buflen=4096 00:14:32.241 fio: pid=2821648, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:32.241 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:32.241 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:32.498 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=55058432, buflen=4096 00:14:32.498 fio: pid=2821646, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:32.498 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:32.498 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:32.755 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=47071232, buflen=4096 00:14:32.755 fio: pid=2821647, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:32.755 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:32.755 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:32.755 00:14:32.755 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2821646: Mon Jul 15 14:49:06 2024 00:14:32.755 read: IOPS=9805, BW=38.3MiB/s (40.2MB/s)(117MiB/3042msec) 00:14:32.755 slat (usec): min=6, max=25733, avg= 8.96, stdev=194.91 00:14:32.755 clat (usec): min=50, 
max=284, avg=91.77, stdev=23.49 00:14:32.755 lat (usec): min=57, max=25823, avg=100.74, stdev=196.33 00:14:32.755 clat percentiles (usec): 00:14:32.755 | 1.00th=[ 70], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 80], 00:14:32.755 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 87], 00:14:32.755 | 70.00th=[ 90], 80.00th=[ 94], 90.00th=[ 133], 95.00th=[ 145], 00:14:32.755 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 204], 99.95th=[ 208], 00:14:32.755 | 99.99th=[ 265] 00:14:32.755 bw ( KiB/s): min=27144, max=43496, per=30.10%, avg=39304.00, stdev=7072.88, samples=5 00:14:32.755 iops : min= 6786, max=10874, avg=9826.00, stdev=1768.22, samples=5 00:14:32.755 lat (usec) : 100=85.77%, 250=14.22%, 500=0.01% 00:14:32.755 cpu : usr=2.76%, sys=11.08%, ctx=29832, majf=0, minf=1 00:14:32.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:32.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.755 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.755 issued rwts: total=29827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:32.755 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2821647: Mon Jul 15 14:49:06 2024 00:14:32.755 read: IOPS=8588, BW=33.5MiB/s (35.2MB/s)(109MiB/3246msec) 00:14:32.755 slat (usec): min=6, max=15768, avg= 9.23, stdev=165.53 00:14:32.755 clat (usec): min=50, max=325, avg=105.40, stdev=35.33 00:14:32.755 lat (usec): min=56, max=15860, avg=114.63, stdev=169.29 00:14:32.755 clat percentiles (usec): 00:14:32.755 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 75], 00:14:32.755 | 30.00th=[ 79], 40.00th=[ 83], 50.00th=[ 92], 60.00th=[ 125], 00:14:32.755 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 147], 95.00th=[ 174], 00:14:32.755 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 202], 99.95th=[ 206], 00:14:32.755 | 99.99th=[ 219] 00:14:32.755 bw ( KiB/s): min=28200, max=46064, per=25.71%, avg=33572.17, stdev=7203.14, samples=6 00:14:32.755 iops : min= 7050, max=11516, avg=8393.00, stdev=1800.76, samples=6 00:14:32.755 lat (usec) : 100=53.62%, 250=46.38%, 500=0.01% 00:14:32.755 cpu : usr=2.93%, sys=9.43%, ctx=27884, majf=0, minf=1 00:14:32.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:32.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.755 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.755 issued rwts: total=27877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:32.755 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2821648: Mon Jul 15 14:49:06 2024 00:14:32.755 read: IOPS=7034, BW=27.5MiB/s (28.8MB/s)(79.2MiB/2881msec) 00:14:32.755 slat (usec): min=5, max=10893, avg= 7.89, stdev=93.64 00:14:32.755 clat (usec): min=62, max=325, avg=131.54, stdev=19.17 00:14:32.755 lat (usec): min=68, max=11017, avg=139.43, stdev=95.43 00:14:32.755 clat percentiles (usec): 00:14:32.755 | 1.00th=[ 80], 5.00th=[ 88], 10.00th=[ 103], 20.00th=[ 124], 00:14:32.755 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:14:32.755 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 161], 00:14:32.755 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 194], 99.95th=[ 202], 00:14:32.755 | 99.99th=[ 251] 00:14:32.755 bw ( KiB/s): min=27136, max=28264, per=21.39%, 
avg=27936.00, stdev=472.27, samples=5 00:14:32.755 iops : min= 6784, max= 7066, avg=6984.00, stdev=118.07, samples=5 00:14:32.755 lat (usec) : 100=9.25%, 250=90.73%, 500=0.01% 00:14:32.755 cpu : usr=2.08%, sys=7.33%, ctx=20269, majf=0, minf=1 00:14:32.755 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:32.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.755 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.755 issued rwts: total=20266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.755 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:32.755 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2821649: Mon Jul 15 14:49:06 2024 00:14:32.755 read: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(109MiB/2688msec) 00:14:32.755 slat (nsec): min=6193, max=41520, avg=7039.44, stdev=684.05 00:14:32.755 clat (usec): min=65, max=214, avg=87.70, stdev= 6.27 00:14:32.755 lat (usec): min=78, max=225, avg=94.74, stdev= 6.32 00:14:32.755 clat percentiles (usec): 00:14:32.755 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 83], 00:14:32.755 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 87], 60.00th=[ 89], 00:14:32.755 | 70.00th=[ 90], 80.00th=[ 93], 90.00th=[ 96], 95.00th=[ 99], 00:14:32.756 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 115], 99.95th=[ 118], 00:14:32.756 | 99.99th=[ 131] 00:14:32.756 bw ( KiB/s): min=41776, max=41968, per=32.07%, avg=41880.00, stdev=92.95, samples=5 00:14:32.756 iops : min=10444, max=10492, avg=10470.00, stdev=23.24, samples=5 00:14:32.756 lat (usec) : 100=95.80%, 250=4.20% 00:14:32.756 cpu : usr=2.87%, sys=11.95%, ctx=28003, majf=0, minf=2 00:14:32.756 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:32.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.756 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.756 issued rwts: total=28003,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.756 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:32.756 00:14:32.756 Run status group 0 (all jobs): 00:14:32.756 READ: bw=128MiB/s (134MB/s), 27.5MiB/s-40.7MiB/s (28.8MB/s-42.7MB/s), io=414MiB (434MB), run=2688-3246msec 00:14:32.756 00:14:32.756 Disk stats (read/write): 00:14:32.756 nvme0n1: ios=28255/0, merge=0/0, ticks=2421/0, in_queue=2421, util=94.89% 00:14:32.756 nvme0n2: ios=26217/0, merge=0/0, ticks=2654/0, in_queue=2654, util=94.41% 00:14:32.756 nvme0n3: ios=20265/0, merge=0/0, ticks=2545/0, in_queue=2545, util=95.99% 00:14:32.756 nvme0n4: ios=27326/0, merge=0/0, ticks=2149/0, in_queue=2149, util=96.46% 00:14:33.013 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:33.013 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:33.269 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:33.269 14:49:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:33.269 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:33.269 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:33.526 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:33.526 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:33.782 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:33.782 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 2821500 00:14:33.782 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:33.782 14:49:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:34.711 nvmf hotplug test: fio failed as expected 00:14:34.711 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:34.968 rmmod nvme_rdma 00:14:34.968 rmmod nvme_fabrics 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:34.968 14:49:08 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2818558 ']' 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2818558 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2818558 ']' 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2818558 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2818558 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2818558' 00:14:34.968 killing process with pid 2818558 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2818558 00:14:34.968 14:49:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2818558 00:14:35.226 14:49:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:35.226 14:49:09 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:35.226 00:14:35.226 real 0m24.466s 00:14:35.226 user 1m50.203s 00:14:35.226 sys 0m8.067s 00:14:35.226 14:49:09 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.226 14:49:09 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.226 ************************************ 00:14:35.226 END TEST nvmf_fio_target 00:14:35.226 ************************************ 00:14:35.226 14:49:09 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:35.226 14:49:09 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:35.226 14:49:09 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.226 14:49:09 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.226 14:49:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:35.226 ************************************ 00:14:35.226 START TEST nvmf_bdevio 00:14:35.226 ************************************ 00:14:35.226 14:49:09 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:35.484 * Looking for test storage... 
00:14:35.484 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:35.484 14:49:09 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.484 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:35.484 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.484 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.485 14:49:09 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:40.745 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:40.745 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.745 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:40.746 Found net devices under 0000:da:00.0: mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:40.746 Found net devices under 0000:da:00.1: mlx_0_1 00:14:40.746 
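[editor's sketch] gather_supported_nvmf_pci_devs, traced above, matches the Mellanox device IDs (0x1015 on this rig) and then resolves each PCI address to its netdev through sysfs. A short sketch of that lookup for one of the addresses found here, assuming the same sysfs layout:

    # Resolve a matched PCI address to its net device name, as the trace does.
    pci=0000:da:00.0
    pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)          # e.g. .../net/mlx_0_0
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"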
14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:40.746 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:40.746 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:40.746 altname enp218s0f0np0 00:14:40.746 altname ens818f0np0 00:14:40.746 inet 192.168.100.8/24 scope global mlx_0_0 00:14:40.746 valid_lft forever preferred_lft forever 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:40.746 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:40.746 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:40.746 altname enp218s0f1np1 00:14:40.746 altname ens818f1np1 00:14:40.746 inet 192.168.100.9/24 scope global mlx_0_1 00:14:40.746 valid_lft forever preferred_lft forever 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:40.746 192.168.100.9' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:40.746 192.168.100.9' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:40.746 192.168.100.9' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.746 14:49:14 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2825653 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2825653 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2825653 ']' 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:40.746 14:49:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:40.746 [2024-07-15 14:49:14.421936] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:14:40.746 [2024-07-15 14:49:14.421988] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.746 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.746 [2024-07-15 14:49:14.477980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.746 [2024-07-15 14:49:14.553089] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.746 [2024-07-15 14:49:14.553130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.746 [2024-07-15 14:49:14.553137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.746 [2024-07-15 14:49:14.553142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.746 [2024-07-15 14:49:14.553147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
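[editor's sketch] nvmfappstart, as traced above, launches nvmf_tgt with core mask 0x78 and then blocks in waitforlisten until the app answers on its RPC socket. A rough stand-in for that sequence; the polling loop is an assumption, since waitforlisten's exact implementation in autotest_common.sh is not shown in this trace:

    # Start the target and wait for its RPC socket, mirroring the traced nvmfappstart call.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done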
00:14:40.746 [2024-07-15 14:49:14.553275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:40.746 [2024-07-15 14:49:14.553365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:40.746 [2024-07-15 14:49:14.553472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.746 [2024-07-15 14:49:14.553473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 [2024-07-15 14:49:15.305267] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21883d0/0x218c8c0) succeed. 00:14:41.679 [2024-07-15 14:49:15.314515] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21899c0/0x21cdf50) succeed. 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 Malloc0 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:41.679 [2024-07-15 14:49:15.475534] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:41.679 { 00:14:41.679 "params": { 00:14:41.679 "name": "Nvme$subsystem", 00:14:41.679 "trtype": "$TEST_TRANSPORT", 00:14:41.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.679 "adrfam": "ipv4", 00:14:41.679 "trsvcid": "$NVMF_PORT", 00:14:41.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.679 "hdgst": ${hdgst:-false}, 00:14:41.679 "ddgst": ${ddgst:-false} 00:14:41.679 }, 00:14:41.679 "method": "bdev_nvme_attach_controller" 00:14:41.679 } 00:14:41.679 EOF 00:14:41.679 )") 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:41.679 14:49:15 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:41.679 "params": { 00:14:41.679 "name": "Nvme1", 00:14:41.679 "trtype": "rdma", 00:14:41.679 "traddr": "192.168.100.8", 00:14:41.679 "adrfam": "ipv4", 00:14:41.679 "trsvcid": "4420", 00:14:41.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:41.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:41.679 "hdgst": false, 00:14:41.679 "ddgst": false 00:14:41.679 }, 00:14:41.679 "method": "bdev_nvme_attach_controller" 00:14:41.679 }' 00:14:41.679 [2024-07-15 14:49:15.520014] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
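[editor's sketch] The target-side setup traced above creates the RDMA transport, a 64 MiB/512 B malloc bdev, the cnode1 subsystem, its namespace, and a listener on 192.168.100.8:4420; bdevio then attaches through the generated JSON (bdev_nvme_attach_controller over rdma). The same RPCs written out against rpc.py, as a sketch; rpc_cmd in the harness issues these calls through the same socket:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420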
00:14:41.679 [2024-07-15 14:49:15.520061] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825903 ] 00:14:41.679 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.679 [2024-07-15 14:49:15.576573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:41.935 [2024-07-15 14:49:15.653213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.935 [2024-07-15 14:49:15.653311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.935 [2024-07-15 14:49:15.653313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.935 I/O targets: 00:14:41.935 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:41.935 00:14:41.935 00:14:41.935 CUnit - A unit testing framework for C - Version 2.1-3 00:14:41.935 http://cunit.sourceforge.net/ 00:14:41.935 00:14:41.935 00:14:41.935 Suite: bdevio tests on: Nvme1n1 00:14:41.935 Test: blockdev write read block ...passed 00:14:41.935 Test: blockdev write zeroes read block ...passed 00:14:41.935 Test: blockdev write zeroes read no split ...passed 00:14:41.935 Test: blockdev write zeroes read split ...passed 00:14:41.935 Test: blockdev write zeroes read split partial ...passed 00:14:41.935 Test: blockdev reset ...[2024-07-15 14:49:15.854869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:42.191 [2024-07-15 14:49:15.877697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:42.191 [2024-07-15 14:49:15.904274] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:42.191 passed 00:14:42.191 Test: blockdev write read 8 blocks ...passed 00:14:42.191 Test: blockdev write read size > 128k ...passed 00:14:42.191 Test: blockdev write read invalid size ...passed 00:14:42.191 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:42.191 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:42.191 Test: blockdev write read max offset ...passed 00:14:42.191 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:42.191 Test: blockdev writev readv 8 blocks ...passed 00:14:42.191 Test: blockdev writev readv 30 x 1block ...passed 00:14:42.191 Test: blockdev writev readv block ...passed 00:14:42.191 Test: blockdev writev readv size > 128k ...passed 00:14:42.191 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:42.191 Test: blockdev comparev and writev ...[2024-07-15 14:49:15.907297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.907332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.907507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.907524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.907702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.907718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.907882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.907898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:42.191 [2024-07-15 14:49:15.907904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:42.191 passed 00:14:42.191 Test: blockdev nvme passthru rw ...passed 00:14:42.191 Test: blockdev nvme passthru vendor specific ...[2024-07-15 14:49:15.908153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:42.191 [2024-07-15 14:49:15.908163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.908198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:42.191 [2024-07-15 14:49:15.908205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.908249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:42.191 [2024-07-15 14:49:15.908256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:42.191 [2024-07-15 14:49:15.908296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:42.191 [2024-07-15 14:49:15.908303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:42.191 passed 00:14:42.192 Test: blockdev nvme admin passthru ...passed 00:14:42.192 Test: blockdev copy ...passed 00:14:42.192 00:14:42.192 Run Summary: Type Total Ran Passed Failed Inactive 00:14:42.192 suites 1 1 n/a 0 0 00:14:42.192 tests 23 23 23 0 0 00:14:42.192 asserts 152 152 152 0 n/a 00:14:42.192 00:14:42.192 Elapsed time = 0.171 seconds 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.192 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:42.448 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:42.448 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:42.448 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:42.448 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:42.449 rmmod nvme_rdma 00:14:42.449 rmmod nvme_fabrics 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2825653 ']' 00:14:42.449 14:49:16 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2825653 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2825653 ']' 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2825653 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2825653 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2825653' 00:14:42.449 killing process with pid 2825653 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2825653 00:14:42.449 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2825653 00:14:42.706 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.706 14:49:16 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:42.706 00:14:42.706 real 0m7.402s 00:14:42.706 user 0m10.155s 00:14:42.706 sys 0m4.480s 00:14:42.706 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:42.706 14:49:16 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:42.706 ************************************ 00:14:42.706 END TEST nvmf_bdevio 00:14:42.706 ************************************ 00:14:42.706 14:49:16 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:42.706 14:49:16 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:42.706 14:49:16 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:42.706 14:49:16 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.706 14:49:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:42.706 ************************************ 00:14:42.706 START TEST nvmf_auth_target 00:14:42.706 ************************************ 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:42.706 * Looking for test storage... 
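[editor's sketch] The nvmftestfini path traced at the end of the bdevio run retries module removal up to 20 times before re-enabling errexit. A hedged sketch of that loop; the retry interval is an assumption, since the trace only shows the modprobe calls and the loop header:

    # Unload the NVMe fabrics modules with retries, as nvmf/common.sh does on teardown.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1   # back-off assumed; not visible in this trace
    done
    set -e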
00:14:42.706 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.706 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.963 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:42.963 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:42.963 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:42.964 14:49:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:48.226 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:48.226 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:48.226 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:48.227 Found net devices under 0000:da:00.0: mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:48.227 Found net devices under 0000:da:00.1: mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:48.227 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:48.227 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:48.227 altname enp218s0f0np0 00:14:48.227 altname ens818f0np0 00:14:48.227 inet 192.168.100.8/24 scope global mlx_0_0 00:14:48.227 valid_lft forever preferred_lft forever 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:48.227 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:48.227 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:48.227 altname enp218s0f1np1 00:14:48.227 altname ens818f1np1 00:14:48.227 inet 192.168.100.9/24 scope global mlx_0_1 00:14:48.227 valid_lft forever preferred_lft forever 
00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:48.227 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:48.228 
14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:48.228 192.168.100.9' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:48.228 192.168.100.9' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:48.228 192.168.100.9' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2828973 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2828973 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2828973 ']' 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
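The lines above collect both discovered addresses into RDMA_IP_LIST (one per line), derive the first and second target IPs with head/tail, and then extend the transport options with --num-shared-buffers. Restated outside the harness, with the values taken from this run:

# RDMA_IP_LIST as gathered above: one address per line.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'

# First entry becomes the primary target address, second entry the secondary one.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

# Transport options used for the RDMA target in this test.
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'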
00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.228 14:49:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2829220 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8d90bf73be40076177d501b8909a34f7275b3b7103ba84f 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.6EH 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8d90bf73be40076177d501b8909a34f7275b3b7103ba84f 0 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8d90bf73be40076177d501b8909a34f7275b3b7103ba84f 0 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8d90bf73be40076177d501b8909a34f7275b3b7103ba84f 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.6EH 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.6EH 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.6EH 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6390a707909927dfe2978a0ce3d92dcfd44a55cbad0906b3f3137d44b2e59843 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lPa 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6390a707909927dfe2978a0ce3d92dcfd44a55cbad0906b3f3137d44b2e59843 3 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6390a707909927dfe2978a0ce3d92dcfd44a55cbad0906b3f3137d44b2e59843 3 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6390a707909927dfe2978a0ce3d92dcfd44a55cbad0906b3f3137d44b2e59843 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lPa 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lPa 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.lPa 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ddefa236a5b657a477eee662b1738e82 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jcV 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ddefa236a5b657a477eee662b1738e82 1 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ddefa236a5b657a477eee662b1738e82 1 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:49.164 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ddefa236a5b657a477eee662b1738e82 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jcV 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jcV 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.jcV 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f91d356e4e85ef1cd3914cfc8ed9662b8454f2331e50e51b 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nna 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f91d356e4e85ef1cd3914cfc8ed9662b8454f2331e50e51b 2 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f91d356e4e85ef1cd3914cfc8ed9662b8454f2331e50e51b 2 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f91d356e4e85ef1cd3914cfc8ed9662b8454f2331e50e51b 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nna 00:14:49.165 14:49:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nna 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.nna 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=9e526935fe4ddd2c2c3eca2dfa528b16517f1ed9b0f85884 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vky 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9e526935fe4ddd2c2c3eca2dfa528b16517f1ed9b0f85884 2 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9e526935fe4ddd2c2c3eca2dfa528b16517f1ed9b0f85884 2 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9e526935fe4ddd2c2c3eca2dfa528b16517f1ed9b0f85884 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vky 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vky 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.vky 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=86f20a948943954b4cc9115ed9363b08 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LBM 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 86f20a948943954b4cc9115ed9363b08 1 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 86f20a948943954b4cc9115ed9363b08 1 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=86f20a948943954b4cc9115ed9363b08 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:49.165 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LBM 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LBM 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.LBM 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8082a5deb4a6a8d4a57e056717d1495c2adbe9dfc5c682ef6ad85393c05d172f 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.v8r 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8082a5deb4a6a8d4a57e056717d1495c2adbe9dfc5c682ef6ad85393c05d172f 3 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8082a5deb4a6a8d4a57e056717d1495c2adbe9dfc5c682ef6ad85393c05d172f 3 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8082a5deb4a6a8d4a57e056717d1495c2adbe9dfc5c682ef6ad85393c05d172f 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.v8r 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.v8r 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.v8r 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2828973 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2828973 ']' 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
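Each gen_dhchap_key call traced above follows the same pattern: pull N hex characters from /dev/urandom with xxd, wrap them in a DHHC-1 secret via the inline "python -" step, write the result to a mktemp file, and chmod it to 0600. The sketch below reproduces that flow in one place; gen_dhchap_key_sketch is an illustrative name, and the python one-liner is only an approximation of the un-shown formatting step, inferred from the DHHC-1:<hash-id>:<base64>: secrets that appear later in the nvme connect commands (base64 of the ASCII hex key followed by its little-endian CRC32).

# Usage: gen_dhchap_key_sketch <digest> <hex-length>, e.g. "null 48" or "sha512 64".
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    # Hash identifiers used in the DHHC-1 prefix (00=null, 01=sha256, 02=sha384, 03=sha512).
    declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    # len hex characters = len/2 random bytes, printed as one continuous hex string.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

    file=$(mktemp -t "spdk.key-$digest.XXX")

    # Approximation of the inline python step: DHHC-1:<id>:<base64(key + crc32le)>:
    python3 -c 'import sys, base64, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' \
        "$key" "${ids[$digest]}" > "$file"

    chmod 0600 "$file"
    echo "$file"
}

# e.g. keys[0]=$(gen_dhchap_key_sketch null 48); ckeys[0]=$(gen_dhchap_key_sketch sha512 64)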
00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.424 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2829220 /var/tmp/host.sock 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2829220 ']' 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:49.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.682 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.6EH 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.6EH 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.6EH 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.lPa ]] 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lPa 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lPa 00:14:49.941 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lPa 00:14:50.199 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:50.199 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.jcV 00:14:50.199 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.199 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.199 14:49:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.199 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.jcV 00:14:50.199 14:49:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.jcV 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.nna ]] 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nna 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nna 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.nna 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.vky 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.vky 00:14:50.457 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.vky 00:14:50.716 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.LBM ]] 00:14:50.716 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LBM 00:14:50.716 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.716 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.716 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.716 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LBM 00:14:50.716 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.LBM 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.v8r 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.v8r 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.v8r 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:50.975 14:49:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.233 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.492 00:14:51.492 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.492 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.492 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.750 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.750 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.750 14:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.750 14:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.751 { 00:14:51.751 "cntlid": 1, 00:14:51.751 "qid": 0, 00:14:51.751 "state": "enabled", 00:14:51.751 "thread": "nvmf_tgt_poll_group_000", 00:14:51.751 "listen_address": { 00:14:51.751 "trtype": "RDMA", 00:14:51.751 "adrfam": "IPv4", 00:14:51.751 "traddr": "192.168.100.8", 00:14:51.751 "trsvcid": "4420" 00:14:51.751 }, 00:14:51.751 "peer_address": { 00:14:51.751 "trtype": "RDMA", 00:14:51.751 "adrfam": "IPv4", 00:14:51.751 "traddr": "192.168.100.8", 00:14:51.751 "trsvcid": "39195" 00:14:51.751 }, 00:14:51.751 "auth": { 00:14:51.751 "state": "completed", 00:14:51.751 "digest": "sha256", 00:14:51.751 "dhgroup": "null" 00:14:51.751 } 00:14:51.751 } 00:14:51.751 ]' 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.751 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.009 14:49:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:14:52.577 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.836 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.094 00:14:53.094 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.094 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.094 14:49:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.353 14:49:27 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.353 { 00:14:53.353 "cntlid": 3, 00:14:53.353 "qid": 0, 00:14:53.353 "state": "enabled", 00:14:53.353 "thread": "nvmf_tgt_poll_group_000", 00:14:53.353 "listen_address": { 00:14:53.353 "trtype": "RDMA", 00:14:53.353 "adrfam": "IPv4", 00:14:53.353 "traddr": "192.168.100.8", 00:14:53.353 "trsvcid": "4420" 00:14:53.353 }, 00:14:53.353 "peer_address": { 00:14:53.353 "trtype": "RDMA", 00:14:53.353 "adrfam": "IPv4", 00:14:53.353 "traddr": "192.168.100.8", 00:14:53.353 "trsvcid": "46596" 00:14:53.353 }, 00:14:53.353 "auth": { 00:14:53.353 "state": "completed", 00:14:53.353 "digest": "sha256", 00:14:53.353 "dhgroup": "null" 00:14:53.353 } 00:14:53.353 } 00:14:53.353 ]' 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:53.353 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.612 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.612 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.612 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.612 14:49:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:14:54.179 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.437 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:54.437 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.437 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.437 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.437 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.437 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:54.437 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.696 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.696 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.954 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.954 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.954 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.954 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.954 14:49:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.954 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.954 { 00:14:54.954 "cntlid": 5, 00:14:54.954 "qid": 0, 00:14:54.954 "state": "enabled", 00:14:54.954 "thread": "nvmf_tgt_poll_group_000", 00:14:54.954 "listen_address": { 00:14:54.954 "trtype": "RDMA", 00:14:54.954 "adrfam": "IPv4", 00:14:54.954 "traddr": "192.168.100.8", 00:14:54.954 "trsvcid": "4420" 00:14:54.954 }, 00:14:54.954 "peer_address": { 00:14:54.954 "trtype": "RDMA", 00:14:54.954 "adrfam": "IPv4", 00:14:54.955 "traddr": "192.168.100.8", 00:14:54.955 "trsvcid": "52516" 00:14:54.955 }, 00:14:54.955 "auth": { 00:14:54.955 "state": "completed", 00:14:54.955 "digest": "sha256", 00:14:54.955 "dhgroup": "null" 00:14:54.955 } 00:14:54.955 } 00:14:54.955 ]' 00:14:54.955 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.955 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:54.955 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.955 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:54.955 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.213 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.213 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.213 14:49:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.213 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:14:55.777 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.035 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:56.035 14:49:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.035 14:49:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.035 14:49:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.035 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.035 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.035 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- 
# hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.293 14:49:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.293 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.551 { 00:14:56.551 "cntlid": 7, 00:14:56.551 "qid": 0, 00:14:56.551 "state": "enabled", 00:14:56.551 "thread": "nvmf_tgt_poll_group_000", 00:14:56.551 "listen_address": { 00:14:56.551 "trtype": "RDMA", 00:14:56.551 "adrfam": "IPv4", 00:14:56.551 "traddr": "192.168.100.8", 00:14:56.551 "trsvcid": "4420" 00:14:56.551 }, 00:14:56.551 "peer_address": { 00:14:56.551 "trtype": "RDMA", 00:14:56.551 "adrfam": "IPv4", 00:14:56.551 "traddr": "192.168.100.8", 00:14:56.551 "trsvcid": "60187" 00:14:56.551 }, 00:14:56.551 "auth": { 00:14:56.551 "state": "completed", 00:14:56.551 "digest": "sha256", 00:14:56.551 "dhgroup": "null" 00:14:56.551 } 00:14:56.551 } 00:14:56.551 ]' 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.551 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.810 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:56.810 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.810 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.810 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.810 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.810 14:49:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.744 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.002 00:14:58.003 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.003 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.003 14:49:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.261 { 00:14:58.261 "cntlid": 9, 00:14:58.261 "qid": 0, 00:14:58.261 "state": "enabled", 00:14:58.261 "thread": "nvmf_tgt_poll_group_000", 00:14:58.261 "listen_address": { 00:14:58.261 "trtype": "RDMA", 00:14:58.261 "adrfam": "IPv4", 00:14:58.261 "traddr": "192.168.100.8", 00:14:58.261 "trsvcid": "4420" 00:14:58.261 }, 00:14:58.261 "peer_address": { 00:14:58.261 "trtype": "RDMA", 00:14:58.261 "adrfam": "IPv4", 00:14:58.261 "traddr": "192.168.100.8", 00:14:58.261 "trsvcid": "49267" 00:14:58.261 }, 00:14:58.261 "auth": { 00:14:58.261 "state": "completed", 00:14:58.261 "digest": "sha256", 00:14:58.261 "dhgroup": "ffdhe2048" 00:14:58.261 } 00:14:58.261 } 00:14:58.261 ]' 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.261 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.520 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:14:59.085 14:49:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.342 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:59.342 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.342 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.342 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.342 
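For readers tracing the RPC calls above, this is a minimal sketch of what one connect_authenticate iteration in target/auth.sh exercises, reconstructed from the commands visible in the log. The rpc.py path, sockets, NQNs and key names are taken from the trace (key0/ckey0 refer to keyring entries registered earlier in the run); the standalone-script framing and step comments are editorial assumptions, not part of the suite.

#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP cycle as exercised above (sha256 + ffdhe2048, key0).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock                                   # host-side bdev RPC socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# 1. Pin the host initiator to a single digest/DH-group combination.
"$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Register the host on the target subsystem with its DH-HMAC-CHAP key (target-side socket).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host; authentication happens during connect.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Inspect the resulting qpair: digest, dhgroup and state should reflect the negotiation.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

# 5. Tear down so the next digest/dhgroup/key combination starts clean.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0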
14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.342 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:59.342 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.600 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.600 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.858 { 00:14:59.858 "cntlid": 11, 00:14:59.858 "qid": 0, 00:14:59.858 "state": "enabled", 00:14:59.858 "thread": "nvmf_tgt_poll_group_000", 00:14:59.858 "listen_address": { 00:14:59.858 "trtype": "RDMA", 
00:14:59.858 "adrfam": "IPv4", 00:14:59.858 "traddr": "192.168.100.8", 00:14:59.858 "trsvcid": "4420" 00:14:59.858 }, 00:14:59.858 "peer_address": { 00:14:59.858 "trtype": "RDMA", 00:14:59.858 "adrfam": "IPv4", 00:14:59.858 "traddr": "192.168.100.8", 00:14:59.858 "trsvcid": "58932" 00:14:59.858 }, 00:14:59.858 "auth": { 00:14:59.858 "state": "completed", 00:14:59.858 "digest": "sha256", 00:14:59.858 "dhgroup": "ffdhe2048" 00:14:59.858 } 00:14:59.858 } 00:14:59.858 ]' 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.858 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.116 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.116 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.116 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.116 14:49:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:00.681 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.937 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:00.937 14:49:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.937 14:49:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.937 14:49:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.937 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.937 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:00.937 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:01.194 
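The trace also verifies that a plain nvme-cli initiator can authenticate in-band with the same keys, then disconnects and removes the host before the next combination. Below is a condensed form of that check; the DHHC-1 secrets are placeholders (the real values appear in the log above), while the flags follow the nvme connect/disconnect and nvmf_subsystem_remove_host calls shown in the trace.

subnqn=nqn.2024-03.io.spdk:cnode0
hostid=803833e2-2ada-e911-906e-0017a4403562
# Placeholder secrets: substitute the DHHC-1:xx:... strings generated for this key index.
key='DHHC-1:01:<host-secret-for-this-key>:'
ckey='DHHC-1:02:<ctrlr-secret-for-this-key>:'

# In-band DH-HMAC-CHAP authentication through the kernel initiator.
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:$hostid --hostid $hostid \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"

# Drop the connection and de-register the host before the next combination is tried.
nvme disconnect -n "$subnqn"
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host "$subnqn" nqn.2014-08.org.nvmexpress:uuid:$hostid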
14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.194 14:49:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.451 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.451 { 00:15:01.451 "cntlid": 13, 00:15:01.451 "qid": 0, 00:15:01.451 "state": "enabled", 00:15:01.451 "thread": "nvmf_tgt_poll_group_000", 00:15:01.451 "listen_address": { 00:15:01.451 "trtype": "RDMA", 00:15:01.451 "adrfam": "IPv4", 00:15:01.451 "traddr": "192.168.100.8", 00:15:01.451 "trsvcid": "4420" 00:15:01.451 }, 00:15:01.451 "peer_address": { 00:15:01.451 "trtype": "RDMA", 00:15:01.451 "adrfam": "IPv4", 00:15:01.451 "traddr": "192.168.100.8", 00:15:01.451 "trsvcid": "40698" 00:15:01.451 }, 00:15:01.451 "auth": { 00:15:01.451 "state": "completed", 00:15:01.451 "digest": "sha256", 00:15:01.451 "dhgroup": "ffdhe2048" 00:15:01.451 } 00:15:01.451 } 00:15:01.451 ]' 00:15:01.451 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.708 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.708 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.708 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:01.708 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
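The repeated jq checks above all follow the same pattern: pull the first qpair from nvmf_subsystem_get_qpairs and compare its auth.digest, auth.dhgroup and auth.state fields. A compact, hypothetical helper capturing those three assertions might look like the sketch below; check_auth is not a function from the suite, and the escaped patterns seen in the trace (e.g. \s\h\a\2\5\6) exist only because bash [[ == ]] treats an unquoted right-hand side as a glob, which quoting avoids here.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Hypothetical helper: assert the negotiated auth parameters on the first qpair.
check_auth() {
    local digest=$1 dhgroup=$2 qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"   ]] || return 1
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup"  ]] || return 1
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]] || return 1
}

check_auth sha256 ffdhe2048   # e.g. for the cycle shown immediately above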
00:15:01.708 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.708 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.708 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.965 14:49:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:02.528 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:02.784 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:02.784 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.784 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.784 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:02.784 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:02.785 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.785 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:02.785 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.785 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.785 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.785 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:02.785 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:03.041 00:15:03.041 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.041 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.041 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.299 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.299 14:49:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.299 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.299 14:49:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.299 { 00:15:03.299 "cntlid": 15, 00:15:03.299 "qid": 0, 00:15:03.299 "state": "enabled", 00:15:03.299 "thread": "nvmf_tgt_poll_group_000", 00:15:03.299 "listen_address": { 00:15:03.299 "trtype": "RDMA", 00:15:03.299 "adrfam": "IPv4", 00:15:03.299 "traddr": "192.168.100.8", 00:15:03.299 "trsvcid": "4420" 00:15:03.299 }, 00:15:03.299 "peer_address": { 00:15:03.299 "trtype": "RDMA", 00:15:03.299 "adrfam": "IPv4", 00:15:03.299 "traddr": "192.168.100.8", 00:15:03.299 "trsvcid": "43832" 00:15:03.299 }, 00:15:03.299 "auth": { 00:15:03.299 "state": "completed", 00:15:03.299 "digest": "sha256", 00:15:03.299 "dhgroup": "ffdhe2048" 00:15:03.299 } 00:15:03.299 } 00:15:03.299 ]' 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.299 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.556 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:04.121 14:49:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.121 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.397 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.702 00:15:04.702 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.702 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.702 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.998 14:49:38 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.998 { 00:15:04.998 "cntlid": 17, 00:15:04.998 "qid": 0, 00:15:04.998 "state": "enabled", 00:15:04.998 "thread": "nvmf_tgt_poll_group_000", 00:15:04.998 "listen_address": { 00:15:04.998 "trtype": "RDMA", 00:15:04.998 "adrfam": "IPv4", 00:15:04.998 "traddr": "192.168.100.8", 00:15:04.998 "trsvcid": "4420" 00:15:04.998 }, 00:15:04.998 "peer_address": { 00:15:04.998 "trtype": "RDMA", 00:15:04.998 "adrfam": "IPv4", 00:15:04.998 "traddr": "192.168.100.8", 00:15:04.998 "trsvcid": "40037" 00:15:04.998 }, 00:15:04.998 "auth": { 00:15:04.998 "state": "completed", 00:15:04.998 "digest": "sha256", 00:15:04.998 "dhgroup": "ffdhe3072" 00:15:04.998 } 00:15:04.998 } 00:15:04.998 ]' 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.998 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.256 14:49:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:05.823 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.081 14:49:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.339 00:15:06.339 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.339 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.339 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.597 { 00:15:06.597 "cntlid": 19, 00:15:06.597 "qid": 0, 00:15:06.597 "state": "enabled", 00:15:06.597 "thread": "nvmf_tgt_poll_group_000", 00:15:06.597 "listen_address": { 00:15:06.597 "trtype": "RDMA", 00:15:06.597 "adrfam": "IPv4", 00:15:06.597 "traddr": "192.168.100.8", 00:15:06.597 "trsvcid": "4420" 00:15:06.597 }, 00:15:06.597 "peer_address": { 00:15:06.597 "trtype": "RDMA", 00:15:06.597 "adrfam": "IPv4", 00:15:06.597 "traddr": "192.168.100.8", 00:15:06.597 "trsvcid": "58373" 00:15:06.597 }, 00:15:06.597 "auth": { 
00:15:06.597 "state": "completed", 00:15:06.597 "digest": "sha256", 00:15:06.597 "dhgroup": "ffdhe3072" 00:15:06.597 } 00:15:06.597 } 00:15:06.597 ]' 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.597 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.855 14:49:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:07.421 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.679 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.937 00:15:07.937 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.937 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.937 14:49:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.196 { 00:15:08.196 "cntlid": 21, 00:15:08.196 "qid": 0, 00:15:08.196 "state": "enabled", 00:15:08.196 "thread": "nvmf_tgt_poll_group_000", 00:15:08.196 "listen_address": { 00:15:08.196 "trtype": "RDMA", 00:15:08.196 "adrfam": "IPv4", 00:15:08.196 "traddr": "192.168.100.8", 00:15:08.196 "trsvcid": "4420" 00:15:08.196 }, 00:15:08.196 "peer_address": { 00:15:08.196 "trtype": "RDMA", 00:15:08.196 "adrfam": "IPv4", 00:15:08.196 "traddr": "192.168.100.8", 00:15:08.196 "trsvcid": "40213" 00:15:08.196 }, 00:15:08.196 "auth": { 00:15:08.196 "state": "completed", 00:15:08.196 "digest": "sha256", 00:15:08.196 "dhgroup": "ffdhe3072" 00:15:08.196 } 00:15:08.196 } 00:15:08.196 ]' 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.196 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.454 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.454 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.454 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.454 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:09.020 14:49:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.278 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:09.278 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.278 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.278 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.278 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.278 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.278 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.536 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:09.794 00:15:09.794 14:49:43 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.794 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.794 { 00:15:09.794 "cntlid": 23, 00:15:09.794 "qid": 0, 00:15:09.794 "state": "enabled", 00:15:09.794 "thread": "nvmf_tgt_poll_group_000", 00:15:09.794 "listen_address": { 00:15:09.794 "trtype": "RDMA", 00:15:09.794 "adrfam": "IPv4", 00:15:09.794 "traddr": "192.168.100.8", 00:15:09.794 "trsvcid": "4420" 00:15:09.794 }, 00:15:09.794 "peer_address": { 00:15:09.794 "trtype": "RDMA", 00:15:09.794 "adrfam": "IPv4", 00:15:09.794 "traddr": "192.168.100.8", 00:15:09.794 "trsvcid": "56478" 00:15:09.794 }, 00:15:09.794 "auth": { 00:15:09.794 "state": "completed", 00:15:09.794 "digest": "sha256", 00:15:09.794 "dhgroup": "ffdhe3072" 00:15:09.795 } 00:15:09.795 } 00:15:09.795 ]' 00:15:09.795 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.053 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.053 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.053 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:10.053 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.053 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.053 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.053 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.311 14:49:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.877 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.136 14:49:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.394 00:15:11.395 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.395 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.395 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.653 14:49:45 
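The "for dhgroup in ${dhgroups[@]}" and "for keyid in ${!keys[@]}" markers in the trace show the suite sweeping every configured DH group against every key index through the same connect_authenticate helper. A schematic reconstruction of that sweep follows; only the groups and keys that appear in this part of the log are listed, and the full arrays in target/auth.sh may well be longer.

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Schematic reconstruction of the sweep driving this section of the log.
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen so far in this excerpt
keys=(key0 key1 key2 key3)                      # key names; matching ckeyN entries are optional

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Re-pin the host to the combination under test, then run one full cycle.
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"   # helper defined in target/auth.sh
    done
done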
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.653 { 00:15:11.653 "cntlid": 25, 00:15:11.653 "qid": 0, 00:15:11.653 "state": "enabled", 00:15:11.653 "thread": "nvmf_tgt_poll_group_000", 00:15:11.653 "listen_address": { 00:15:11.653 "trtype": "RDMA", 00:15:11.653 "adrfam": "IPv4", 00:15:11.653 "traddr": "192.168.100.8", 00:15:11.653 "trsvcid": "4420" 00:15:11.653 }, 00:15:11.653 "peer_address": { 00:15:11.653 "trtype": "RDMA", 00:15:11.653 "adrfam": "IPv4", 00:15:11.653 "traddr": "192.168.100.8", 00:15:11.653 "trsvcid": "45255" 00:15:11.653 }, 00:15:11.653 "auth": { 00:15:11.653 "state": "completed", 00:15:11.653 "digest": "sha256", 00:15:11.653 "dhgroup": "ffdhe4096" 00:15:11.653 } 00:15:11.653 } 00:15:11.653 ]' 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.653 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.912 14:49:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:12.477 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.736 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.994 00:15:12.994 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.994 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.994 14:49:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.251 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.251 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.251 14:49:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.251 14:49:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.251 14:49:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.251 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.251 { 00:15:13.251 "cntlid": 27, 00:15:13.252 "qid": 0, 00:15:13.252 "state": "enabled", 00:15:13.252 "thread": "nvmf_tgt_poll_group_000", 00:15:13.252 "listen_address": { 00:15:13.252 "trtype": "RDMA", 00:15:13.252 "adrfam": "IPv4", 00:15:13.252 "traddr": "192.168.100.8", 00:15:13.252 "trsvcid": "4420" 00:15:13.252 }, 00:15:13.252 "peer_address": { 00:15:13.252 "trtype": "RDMA", 00:15:13.252 "adrfam": "IPv4", 00:15:13.252 "traddr": "192.168.100.8", 00:15:13.252 "trsvcid": "39756" 00:15:13.252 }, 00:15:13.252 "auth": { 00:15:13.252 "state": "completed", 00:15:13.252 "digest": "sha256", 00:15:13.252 "dhgroup": "ffdhe4096" 00:15:13.252 } 00:15:13.252 } 00:15:13.252 ]' 00:15:13.252 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.252 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:15:13.252 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.252 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:13.252 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.509 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.509 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.509 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.509 14:49:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.439 14:49:48 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.439 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.695 00:15:14.696 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.696 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.696 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.953 { 00:15:14.953 "cntlid": 29, 00:15:14.953 "qid": 0, 00:15:14.953 "state": "enabled", 00:15:14.953 "thread": "nvmf_tgt_poll_group_000", 00:15:14.953 "listen_address": { 00:15:14.953 "trtype": "RDMA", 00:15:14.953 "adrfam": "IPv4", 00:15:14.953 "traddr": "192.168.100.8", 00:15:14.953 "trsvcid": "4420" 00:15:14.953 }, 00:15:14.953 "peer_address": { 00:15:14.953 "trtype": "RDMA", 00:15:14.953 "adrfam": "IPv4", 00:15:14.953 "traddr": "192.168.100.8", 00:15:14.953 "trsvcid": "45712" 00:15:14.953 }, 00:15:14.953 "auth": { 00:15:14.953 "state": "completed", 00:15:14.953 "digest": "sha256", 00:15:14.953 "dhgroup": "ffdhe4096" 00:15:14.953 } 00:15:14.953 } 00:15:14.953 ]' 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.953 14:49:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.209 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 
803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:15.772 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.029 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:16.029 14:49:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.029 14:49:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.029 14:49:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.029 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.029 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.029 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.285 14:49:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.541 00:15:16.541 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.541 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.541 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:16.541 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.541 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.541 14:49:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.541 14:49:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.798 { 00:15:16.798 "cntlid": 31, 00:15:16.798 "qid": 0, 00:15:16.798 "state": "enabled", 00:15:16.798 "thread": "nvmf_tgt_poll_group_000", 00:15:16.798 "listen_address": { 00:15:16.798 "trtype": "RDMA", 00:15:16.798 "adrfam": "IPv4", 00:15:16.798 "traddr": "192.168.100.8", 00:15:16.798 "trsvcid": "4420" 00:15:16.798 }, 00:15:16.798 "peer_address": { 00:15:16.798 "trtype": "RDMA", 00:15:16.798 "adrfam": "IPv4", 00:15:16.798 "traddr": "192.168.100.8", 00:15:16.798 "trsvcid": "34042" 00:15:16.798 }, 00:15:16.798 "auth": { 00:15:16.798 "state": "completed", 00:15:16.798 "digest": "sha256", 00:15:16.798 "dhgroup": "ffdhe4096" 00:15:16.798 } 00:15:16.798 } 00:15:16.798 ]' 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.798 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.055 14:49:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.620 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.878 14:49:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.136 00:15:18.136 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.136 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.136 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.394 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.394 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.395 { 00:15:18.395 "cntlid": 33, 00:15:18.395 "qid": 0, 00:15:18.395 "state": "enabled", 00:15:18.395 "thread": "nvmf_tgt_poll_group_000", 00:15:18.395 "listen_address": { 00:15:18.395 "trtype": "RDMA", 00:15:18.395 "adrfam": "IPv4", 00:15:18.395 "traddr": "192.168.100.8", 
00:15:18.395 "trsvcid": "4420" 00:15:18.395 }, 00:15:18.395 "peer_address": { 00:15:18.395 "trtype": "RDMA", 00:15:18.395 "adrfam": "IPv4", 00:15:18.395 "traddr": "192.168.100.8", 00:15:18.395 "trsvcid": "34176" 00:15:18.395 }, 00:15:18.395 "auth": { 00:15:18.395 "state": "completed", 00:15:18.395 "digest": "sha256", 00:15:18.395 "dhgroup": "ffdhe6144" 00:15:18.395 } 00:15:18.395 } 00:15:18.395 ]' 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.395 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.653 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.653 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.653 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.653 14:49:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:19.585 14:49:53 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.585 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.150 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.151 14:49:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.151 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.151 { 00:15:20.151 "cntlid": 35, 00:15:20.151 "qid": 0, 00:15:20.151 "state": "enabled", 00:15:20.151 "thread": "nvmf_tgt_poll_group_000", 00:15:20.151 "listen_address": { 00:15:20.151 "trtype": "RDMA", 00:15:20.151 "adrfam": "IPv4", 00:15:20.151 "traddr": "192.168.100.8", 00:15:20.151 "trsvcid": "4420" 00:15:20.151 }, 00:15:20.151 "peer_address": { 00:15:20.151 "trtype": "RDMA", 00:15:20.151 "adrfam": "IPv4", 00:15:20.151 "traddr": "192.168.100.8", 00:15:20.151 "trsvcid": "46988" 00:15:20.151 }, 00:15:20.151 "auth": { 00:15:20.151 "state": "completed", 00:15:20.151 "digest": "sha256", 00:15:20.151 "dhgroup": "ffdhe6144" 00:15:20.151 } 00:15:20.151 } 00:15:20.151 ]' 00:15:20.151 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.151 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.151 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.408 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.409 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
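The block that repeats throughout this log is the connect_authenticate helper exercised at target/auth.sh@34-49: pin the host to one digest/DH group, authorize the host NQN on the subsystem with a DH-HMAC-CHAP key pair, attach a controller through the host's bdev_nvme layer, and confirm via nvmf_subsystem_get_qpairs that the qpair finished authentication with exactly that digest and group. A minimal sketch of one such iteration follows, distilled from the commands printed above; HOST_RPC, TGT_RPC, SUBNQN, HOSTNQN and the digest/dhgroup/keyid variables are placeholder names introduced for the sketch (the host RPC socket /var/tmp/host.sock is taken from the log, while the target-side calls go through the suite's rpc_cmd wrapper whose socket is not shown in this excerpt).

# Sketch of one connect_authenticate iteration, assuming the placeholder names above.
HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side bdev_nvme RPCs, socket as printed in the log
TGT_RPC="scripts/rpc.py"                          # target-side RPCs; the log reaches these via rpc_cmd
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
digest=sha256 dhgroup=ffdhe4096 keyid=0           # one (digest, dhgroup, key) combination from the loop

# Restrict the host to a single digest/DH group, then authorize the host on the target.
# key$keyid / ckey$keyid are key names registered earlier in the test (outside this excerpt).
$HOST_RPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
$TGT_RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attach a controller with the matching keys and check how the qpair actually authenticated.
$HOST_RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
qpairs=$($TGT_RPC nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# Tear the controller down before the nvme-cli leg and the next combination.
$HOST_RPC bdev_nvme_detach_controller nvme0

The jq checks mirror the assertions at auth.sh@46-48 above: an iteration passes only if the qpair reports state "completed" with the digest and DH group the host was pinned to.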
00:15:20.409 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.409 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.409 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.409 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:21.341 14:49:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.341 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.341 14:49:55 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:21.906 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.906 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.907 { 00:15:21.907 "cntlid": 37, 00:15:21.907 "qid": 0, 00:15:21.907 "state": "enabled", 00:15:21.907 "thread": "nvmf_tgt_poll_group_000", 00:15:21.907 "listen_address": { 00:15:21.907 "trtype": "RDMA", 00:15:21.907 "adrfam": "IPv4", 00:15:21.907 "traddr": "192.168.100.8", 00:15:21.907 "trsvcid": "4420" 00:15:21.907 }, 00:15:21.907 "peer_address": { 00:15:21.907 "trtype": "RDMA", 00:15:21.907 "adrfam": "IPv4", 00:15:21.907 "traddr": "192.168.100.8", 00:15:21.907 "trsvcid": "55931" 00:15:21.907 }, 00:15:21.907 "auth": { 00:15:21.907 "state": "completed", 00:15:21.907 "digest": "sha256", 00:15:21.907 "dhgroup": "ffdhe6144" 00:15:21.907 } 00:15:21.907 } 00:15:21.907 ]' 00:15:21.907 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.907 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.907 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.907 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:21.907 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.164 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.164 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.164 14:49:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.164 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:22.729 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:22.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.986 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:22.986 14:49:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.986 14:49:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.986 14:49:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.986 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.986 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:22.986 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.243 14:49:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.501 00:15:23.501 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.501 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.501 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.759 { 00:15:23.759 "cntlid": 39, 00:15:23.759 "qid": 0, 00:15:23.759 "state": "enabled", 00:15:23.759 "thread": "nvmf_tgt_poll_group_000", 00:15:23.759 "listen_address": { 00:15:23.759 "trtype": "RDMA", 00:15:23.759 "adrfam": "IPv4", 00:15:23.759 "traddr": "192.168.100.8", 00:15:23.759 "trsvcid": "4420" 00:15:23.759 }, 00:15:23.759 "peer_address": { 00:15:23.759 "trtype": "RDMA", 00:15:23.759 "adrfam": "IPv4", 00:15:23.759 "traddr": "192.168.100.8", 00:15:23.759 "trsvcid": "43043" 00:15:23.759 }, 00:15:23.759 "auth": { 00:15:23.759 "state": "completed", 00:15:23.759 "digest": "sha256", 00:15:23.759 "dhgroup": "ffdhe6144" 00:15:23.759 } 00:15:23.759 } 00:15:23.759 ]' 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.759 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.016 14:49:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:24.581 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.581 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:24.581 14:49:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.581 14:49:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.837 14:49:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.399 00:15:25.399 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.399 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.399 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.399 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.656 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.656 14:49:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.656 14:49:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.656 14:49:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.656 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.656 { 00:15:25.656 "cntlid": 41, 00:15:25.656 "qid": 0, 00:15:25.657 "state": "enabled", 00:15:25.657 "thread": "nvmf_tgt_poll_group_000", 00:15:25.657 "listen_address": { 00:15:25.657 "trtype": "RDMA", 00:15:25.657 "adrfam": "IPv4", 00:15:25.657 "traddr": "192.168.100.8", 00:15:25.657 "trsvcid": "4420" 00:15:25.657 }, 00:15:25.657 "peer_address": { 00:15:25.657 "trtype": "RDMA", 00:15:25.657 "adrfam": "IPv4", 00:15:25.657 "traddr": "192.168.100.8", 00:15:25.657 "trsvcid": "54484" 00:15:25.657 }, 00:15:25.657 "auth": { 00:15:25.657 "state": "completed", 00:15:25.657 "digest": "sha256", 
00:15:25.657 "dhgroup": "ffdhe8192" 00:15:25.657 } 00:15:25.657 } 00:15:25.657 ]' 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.657 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.914 14:49:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.480 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.738 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.303 00:15:27.303 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.303 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.303 14:50:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.303 { 00:15:27.303 "cntlid": 43, 00:15:27.303 "qid": 0, 00:15:27.303 "state": "enabled", 00:15:27.303 "thread": "nvmf_tgt_poll_group_000", 00:15:27.303 "listen_address": { 00:15:27.303 "trtype": "RDMA", 00:15:27.303 "adrfam": "IPv4", 00:15:27.303 "traddr": "192.168.100.8", 00:15:27.303 "trsvcid": "4420" 00:15:27.303 }, 00:15:27.303 "peer_address": { 00:15:27.303 "trtype": "RDMA", 00:15:27.303 "adrfam": "IPv4", 00:15:27.303 "traddr": "192.168.100.8", 00:15:27.303 "trsvcid": "59948" 00:15:27.303 }, 00:15:27.303 "auth": { 00:15:27.303 "state": "completed", 00:15:27.303 "digest": "sha256", 00:15:27.303 "dhgroup": "ffdhe8192" 00:15:27.303 } 00:15:27.303 } 00:15:27.303 ]' 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.303 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.560 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:27.560 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.560 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.560 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.560 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.818 14:50:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.383 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.639 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.204 00:15:29.204 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.204 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.204 14:50:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.204 { 00:15:29.204 "cntlid": 45, 00:15:29.204 "qid": 0, 00:15:29.204 "state": "enabled", 00:15:29.204 "thread": "nvmf_tgt_poll_group_000", 00:15:29.204 "listen_address": { 00:15:29.204 "trtype": "RDMA", 00:15:29.204 "adrfam": "IPv4", 00:15:29.204 "traddr": "192.168.100.8", 00:15:29.204 "trsvcid": "4420" 00:15:29.204 }, 00:15:29.204 "peer_address": { 00:15:29.204 "trtype": "RDMA", 00:15:29.204 "adrfam": "IPv4", 00:15:29.204 "traddr": "192.168.100.8", 00:15:29.204 "trsvcid": "52815" 00:15:29.204 }, 00:15:29.204 "auth": { 00:15:29.204 "state": "completed", 00:15:29.204 "digest": "sha256", 00:15:29.204 "dhgroup": "ffdhe8192" 00:15:29.204 } 00:15:29.204 } 00:15:29.204 ]' 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:29.204 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.460 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:29.460 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.460 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.460 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.460 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.460 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:30.392 14:50:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:30.392 14:50:04 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.392 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.393 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.393 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.393 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.958 00:15:30.958 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.958 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.958 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:31.215 { 00:15:31.215 "cntlid": 47, 00:15:31.215 "qid": 0, 00:15:31.215 "state": "enabled", 00:15:31.215 "thread": "nvmf_tgt_poll_group_000", 00:15:31.215 "listen_address": { 00:15:31.215 "trtype": "RDMA", 00:15:31.215 "adrfam": "IPv4", 00:15:31.215 "traddr": "192.168.100.8", 00:15:31.215 "trsvcid": "4420" 00:15:31.215 }, 00:15:31.215 "peer_address": { 00:15:31.215 "trtype": "RDMA", 00:15:31.215 "adrfam": "IPv4", 00:15:31.215 "traddr": "192.168.100.8", 00:15:31.215 "trsvcid": "52321" 00:15:31.215 }, 00:15:31.215 "auth": { 00:15:31.215 "state": "completed", 00:15:31.215 "digest": "sha256", 00:15:31.215 "dhgroup": "ffdhe8192" 00:15:31.215 } 00:15:31.215 } 00:15:31.215 ]' 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.215 14:50:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.215 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.215 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.215 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.215 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.215 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.472 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.037 14:50:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:32.294 14:50:06 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.294 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.551 00:15:32.551 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.551 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.551 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.807 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.807 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.807 14:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.807 14:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.807 14:50:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.807 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.807 { 00:15:32.807 "cntlid": 49, 00:15:32.807 "qid": 0, 00:15:32.807 "state": "enabled", 00:15:32.807 "thread": "nvmf_tgt_poll_group_000", 00:15:32.807 "listen_address": { 00:15:32.807 "trtype": "RDMA", 00:15:32.807 "adrfam": "IPv4", 00:15:32.807 "traddr": "192.168.100.8", 00:15:32.807 "trsvcid": "4420" 00:15:32.807 }, 00:15:32.808 "peer_address": { 00:15:32.808 "trtype": "RDMA", 00:15:32.808 "adrfam": "IPv4", 00:15:32.808 "traddr": "192.168.100.8", 00:15:32.808 "trsvcid": "35880" 00:15:32.808 }, 00:15:32.808 "auth": { 00:15:32.808 "state": "completed", 00:15:32.808 "digest": "sha384", 00:15:32.808 "dhgroup": "null" 00:15:32.808 } 00:15:32.808 } 00:15:32.808 ]' 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.808 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.063 14:50:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:33.627 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.884 14:50:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.141 14:50:07 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.141 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.141 14:50:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.141 00:15:34.141 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.142 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.142 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.400 { 00:15:34.400 "cntlid": 51, 00:15:34.400 "qid": 0, 00:15:34.400 "state": "enabled", 00:15:34.400 "thread": "nvmf_tgt_poll_group_000", 00:15:34.400 "listen_address": { 00:15:34.400 "trtype": "RDMA", 00:15:34.400 "adrfam": "IPv4", 00:15:34.400 "traddr": "192.168.100.8", 00:15:34.400 "trsvcid": "4420" 00:15:34.400 }, 00:15:34.400 "peer_address": { 00:15:34.400 "trtype": "RDMA", 00:15:34.400 "adrfam": "IPv4", 00:15:34.400 "traddr": "192.168.100.8", 00:15:34.400 "trsvcid": "40086" 00:15:34.400 }, 00:15:34.400 "auth": { 00:15:34.400 "state": "completed", 00:15:34.400 "digest": "sha384", 00:15:34.400 "dhgroup": "null" 00:15:34.400 } 00:15:34.400 } 00:15:34.400 ]' 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:34.400 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.659 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.659 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.659 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.659 14:50:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:35.240 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.498 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.755 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.755 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.755 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.755 00:15:35.755 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.755 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.755 14:50:09 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.013 { 00:15:36.013 "cntlid": 53, 00:15:36.013 "qid": 0, 00:15:36.013 "state": "enabled", 00:15:36.013 "thread": "nvmf_tgt_poll_group_000", 00:15:36.013 "listen_address": { 00:15:36.013 "trtype": "RDMA", 00:15:36.013 "adrfam": "IPv4", 00:15:36.013 "traddr": "192.168.100.8", 00:15:36.013 "trsvcid": "4420" 00:15:36.013 }, 00:15:36.013 "peer_address": { 00:15:36.013 "trtype": "RDMA", 00:15:36.013 "adrfam": "IPv4", 00:15:36.013 "traddr": "192.168.100.8", 00:15:36.013 "trsvcid": "43961" 00:15:36.013 }, 00:15:36.013 "auth": { 00:15:36.013 "state": "completed", 00:15:36.013 "digest": "sha384", 00:15:36.013 "dhgroup": "null" 00:15:36.013 } 00:15:36.013 } 00:15:36.013 ]' 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.013 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.271 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:36.271 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.271 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.271 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.271 14:50:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.271 14:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:37.205 14:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.205 14:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:37.205 14:50:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.205 14:50:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.205 14:50:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.205 14:50:10 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.205 14:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.205 14:50:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.205 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.463 00:15:37.463 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.463 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.463 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.720 { 00:15:37.720 "cntlid": 55, 00:15:37.720 "qid": 0, 00:15:37.720 "state": "enabled", 00:15:37.720 "thread": "nvmf_tgt_poll_group_000", 00:15:37.720 "listen_address": { 00:15:37.720 "trtype": "RDMA", 00:15:37.720 "adrfam": "IPv4", 00:15:37.720 "traddr": "192.168.100.8", 00:15:37.720 "trsvcid": "4420" 
00:15:37.720 }, 00:15:37.720 "peer_address": { 00:15:37.720 "trtype": "RDMA", 00:15:37.720 "adrfam": "IPv4", 00:15:37.720 "traddr": "192.168.100.8", 00:15:37.720 "trsvcid": "34796" 00:15:37.720 }, 00:15:37.720 "auth": { 00:15:37.720 "state": "completed", 00:15:37.720 "digest": "sha384", 00:15:37.720 "dhgroup": "null" 00:15:37.720 } 00:15:37.720 } 00:15:37.720 ]' 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:37.720 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.977 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.977 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.977 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.977 14:50:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:38.554 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:38.893 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.192 00:15:39.192 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.192 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.192 14:50:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.449 { 00:15:39.449 "cntlid": 57, 00:15:39.449 "qid": 0, 00:15:39.449 "state": "enabled", 00:15:39.449 "thread": "nvmf_tgt_poll_group_000", 00:15:39.449 "listen_address": { 00:15:39.449 "trtype": "RDMA", 00:15:39.449 "adrfam": "IPv4", 00:15:39.449 "traddr": "192.168.100.8", 00:15:39.449 "trsvcid": "4420" 00:15:39.449 }, 00:15:39.449 "peer_address": { 00:15:39.449 "trtype": "RDMA", 00:15:39.449 "adrfam": "IPv4", 00:15:39.449 "traddr": "192.168.100.8", 00:15:39.449 "trsvcid": "33145" 00:15:39.449 }, 00:15:39.449 "auth": { 00:15:39.449 "state": "completed", 00:15:39.449 "digest": "sha384", 00:15:39.449 "dhgroup": "ffdhe2048" 00:15:39.449 } 00:15:39.449 } 00:15:39.449 ]' 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.449 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.706 14:50:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:40.270 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:40.527 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.528 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.528 14:50:14 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:40.784 00:15:40.784 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.784 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.784 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.041 { 00:15:41.041 "cntlid": 59, 00:15:41.041 "qid": 0, 00:15:41.041 "state": "enabled", 00:15:41.041 "thread": "nvmf_tgt_poll_group_000", 00:15:41.041 "listen_address": { 00:15:41.041 "trtype": "RDMA", 00:15:41.041 "adrfam": "IPv4", 00:15:41.041 "traddr": "192.168.100.8", 00:15:41.041 "trsvcid": "4420" 00:15:41.041 }, 00:15:41.041 "peer_address": { 00:15:41.041 "trtype": "RDMA", 00:15:41.041 "adrfam": "IPv4", 00:15:41.041 "traddr": "192.168.100.8", 00:15:41.041 "trsvcid": "59317" 00:15:41.041 }, 00:15:41.041 "auth": { 00:15:41.041 "state": "completed", 00:15:41.041 "digest": "sha384", 00:15:41.041 "dhgroup": "ffdhe2048" 00:15:41.041 } 00:15:41.041 } 00:15:41.041 ]' 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.041 14:50:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.299 14:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:41.861 14:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:42.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.117 14:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:42.117 14:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.117 14:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.117 14:50:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.117 14:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.117 14:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.117 14:50:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.372 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.373 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:42.629 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.629 { 00:15:42.629 "cntlid": 61, 00:15:42.629 "qid": 0, 00:15:42.629 "state": "enabled", 00:15:42.629 "thread": "nvmf_tgt_poll_group_000", 00:15:42.629 "listen_address": { 00:15:42.629 "trtype": "RDMA", 00:15:42.629 "adrfam": "IPv4", 00:15:42.629 "traddr": "192.168.100.8", 00:15:42.629 "trsvcid": "4420" 00:15:42.629 }, 00:15:42.629 "peer_address": { 00:15:42.629 "trtype": "RDMA", 00:15:42.629 "adrfam": "IPv4", 00:15:42.629 "traddr": "192.168.100.8", 00:15:42.629 "trsvcid": "49886" 00:15:42.629 }, 00:15:42.629 "auth": { 00:15:42.629 "state": "completed", 00:15:42.629 "digest": "sha384", 00:15:42.629 "dhgroup": "ffdhe2048" 00:15:42.629 } 00:15:42.629 } 00:15:42.629 ]' 00:15:42.629 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.885 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.885 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.885 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.885 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.885 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.885 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.885 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.142 14:50:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.706 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.963 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.221 00:15:44.221 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.221 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.221 14:50:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.478 { 00:15:44.478 "cntlid": 63, 00:15:44.478 "qid": 0, 00:15:44.478 "state": "enabled", 00:15:44.478 "thread": "nvmf_tgt_poll_group_000", 00:15:44.478 "listen_address": { 00:15:44.478 "trtype": "RDMA", 00:15:44.478 "adrfam": "IPv4", 00:15:44.478 "traddr": "192.168.100.8", 00:15:44.478 "trsvcid": "4420" 00:15:44.478 }, 00:15:44.478 "peer_address": { 00:15:44.478 "trtype": "RDMA", 00:15:44.478 "adrfam": "IPv4", 00:15:44.478 "traddr": "192.168.100.8", 00:15:44.478 "trsvcid": "36865" 00:15:44.478 }, 00:15:44.478 "auth": { 00:15:44.478 "state": "completed", 00:15:44.478 "digest": "sha384", 
00:15:44.478 "dhgroup": "ffdhe2048" 00:15:44.478 } 00:15:44.478 } 00:15:44.478 ]' 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.478 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.735 14:50:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.299 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.556 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.813 00:15:45.814 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.814 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.814 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.071 { 00:15:46.071 "cntlid": 65, 00:15:46.071 "qid": 0, 00:15:46.071 "state": "enabled", 00:15:46.071 "thread": "nvmf_tgt_poll_group_000", 00:15:46.071 "listen_address": { 00:15:46.071 "trtype": "RDMA", 00:15:46.071 "adrfam": "IPv4", 00:15:46.071 "traddr": "192.168.100.8", 00:15:46.071 "trsvcid": "4420" 00:15:46.071 }, 00:15:46.071 "peer_address": { 00:15:46.071 "trtype": "RDMA", 00:15:46.071 "adrfam": "IPv4", 00:15:46.071 "traddr": "192.168.100.8", 00:15:46.071 "trsvcid": "44559" 00:15:46.071 }, 00:15:46.071 "auth": { 00:15:46.071 "state": "completed", 00:15:46.071 "digest": "sha384", 00:15:46.071 "dhgroup": "ffdhe3072" 00:15:46.071 } 00:15:46.071 } 00:15:46.071 ]' 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.071 14:50:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.328 14:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:46.891 14:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.148 14:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:47.148 14:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.148 14:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.148 14:50:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.148 14:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.148 14:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.148 14:50:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.148 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:47.405 00:15:47.405 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.405 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.405 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.663 { 00:15:47.663 "cntlid": 67, 00:15:47.663 "qid": 0, 00:15:47.663 "state": "enabled", 00:15:47.663 "thread": "nvmf_tgt_poll_group_000", 00:15:47.663 "listen_address": { 00:15:47.663 "trtype": "RDMA", 00:15:47.663 "adrfam": "IPv4", 00:15:47.663 "traddr": "192.168.100.8", 00:15:47.663 "trsvcid": "4420" 00:15:47.663 }, 00:15:47.663 "peer_address": { 00:15:47.663 "trtype": "RDMA", 00:15:47.663 "adrfam": "IPv4", 00:15:47.663 "traddr": "192.168.100.8", 00:15:47.663 "trsvcid": "40426" 00:15:47.663 }, 00:15:47.663 "auth": { 00:15:47.663 "state": "completed", 00:15:47.663 "digest": "sha384", 00:15:47.663 "dhgroup": "ffdhe3072" 00:15:47.663 } 00:15:47.663 } 00:15:47.663 ]' 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.663 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.920 14:50:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:48.486 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.744 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.001 00:15:49.001 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.001 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.001 14:50:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.259 { 00:15:49.259 "cntlid": 69, 00:15:49.259 "qid": 0, 00:15:49.259 "state": "enabled", 00:15:49.259 "thread": "nvmf_tgt_poll_group_000", 00:15:49.259 "listen_address": { 00:15:49.259 "trtype": "RDMA", 00:15:49.259 "adrfam": "IPv4", 00:15:49.259 "traddr": "192.168.100.8", 00:15:49.259 "trsvcid": "4420" 00:15:49.259 }, 00:15:49.259 "peer_address": { 00:15:49.259 "trtype": "RDMA", 00:15:49.259 "adrfam": "IPv4", 00:15:49.259 "traddr": "192.168.100.8", 00:15:49.259 "trsvcid": "37778" 00:15:49.259 }, 00:15:49.259 "auth": { 00:15:49.259 "state": "completed", 00:15:49.259 "digest": "sha384", 00:15:49.259 "dhgroup": "ffdhe3072" 00:15:49.259 } 00:15:49.259 } 00:15:49.259 ]' 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.259 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.515 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:49.515 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.515 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.515 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.515 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.515 14:50:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:50.448 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 
ffdhe3072 3 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.449 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.705 00:15:50.705 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.705 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.705 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.962 { 00:15:50.962 "cntlid": 71, 00:15:50.962 "qid": 0, 00:15:50.962 "state": "enabled", 00:15:50.962 "thread": "nvmf_tgt_poll_group_000", 00:15:50.962 "listen_address": { 00:15:50.962 "trtype": "RDMA", 00:15:50.962 "adrfam": "IPv4", 00:15:50.962 "traddr": "192.168.100.8", 00:15:50.962 "trsvcid": "4420" 00:15:50.962 }, 00:15:50.962 "peer_address": { 00:15:50.962 "trtype": "RDMA", 00:15:50.962 "adrfam": "IPv4", 00:15:50.962 "traddr": "192.168.100.8", 00:15:50.962 "trsvcid": "57581" 00:15:50.962 }, 00:15:50.962 "auth": { 00:15:50.962 "state": "completed", 00:15:50.962 "digest": "sha384", 00:15:50.962 "dhgroup": "ffdhe3072" 00:15:50.962 } 00:15:50.962 } 00:15:50.962 ]' 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.962 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.219 14:50:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.219 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:51.784 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.063 14:50:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.321 00:15:52.321 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.321 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.321 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.578 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.578 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.578 14:50:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.578 14:50:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.578 14:50:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.578 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.578 { 00:15:52.578 "cntlid": 73, 00:15:52.578 "qid": 0, 00:15:52.578 "state": "enabled", 00:15:52.578 "thread": "nvmf_tgt_poll_group_000", 00:15:52.578 "listen_address": { 00:15:52.578 "trtype": "RDMA", 00:15:52.578 "adrfam": "IPv4", 00:15:52.578 "traddr": "192.168.100.8", 00:15:52.578 "trsvcid": "4420" 00:15:52.578 }, 00:15:52.578 "peer_address": { 00:15:52.578 "trtype": "RDMA", 00:15:52.578 "adrfam": "IPv4", 00:15:52.578 "traddr": "192.168.100.8", 00:15:52.578 "trsvcid": "35578" 00:15:52.578 }, 00:15:52.578 "auth": { 00:15:52.578 "state": "completed", 00:15:52.578 "digest": "sha384", 00:15:52.578 "dhgroup": "ffdhe4096" 00:15:52.578 } 00:15:52.578 } 00:15:52.578 ]' 00:15:52.578 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.579 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.579 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.835 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.835 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.835 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.835 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.835 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.835 14:50:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.765 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.022 00:15:54.022 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.022 14:50:27 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.022 14:50:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.280 { 00:15:54.280 "cntlid": 75, 00:15:54.280 "qid": 0, 00:15:54.280 "state": "enabled", 00:15:54.280 "thread": "nvmf_tgt_poll_group_000", 00:15:54.280 "listen_address": { 00:15:54.280 "trtype": "RDMA", 00:15:54.280 "adrfam": "IPv4", 00:15:54.280 "traddr": "192.168.100.8", 00:15:54.280 "trsvcid": "4420" 00:15:54.280 }, 00:15:54.280 "peer_address": { 00:15:54.280 "trtype": "RDMA", 00:15:54.280 "adrfam": "IPv4", 00:15:54.280 "traddr": "192.168.100.8", 00:15:54.280 "trsvcid": "59279" 00:15:54.280 }, 00:15:54.280 "auth": { 00:15:54.280 "state": "completed", 00:15:54.280 "digest": "sha384", 00:15:54.280 "dhgroup": "ffdhe4096" 00:15:54.280 } 00:15:54.280 } 00:15:54.280 ]' 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.280 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:54.537 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.537 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.537 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.537 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.537 14:50:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:15:55.469 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.469 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:55.469 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.469 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.469 14:50:29 nvmf_rdma.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.469 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.469 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.470 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.727 00:15:55.727 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.727 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.727 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.984 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.985 { 00:15:55.985 "cntlid": 77, 00:15:55.985 "qid": 0, 00:15:55.985 "state": "enabled", 00:15:55.985 "thread": "nvmf_tgt_poll_group_000", 
00:15:55.985 "listen_address": { 00:15:55.985 "trtype": "RDMA", 00:15:55.985 "adrfam": "IPv4", 00:15:55.985 "traddr": "192.168.100.8", 00:15:55.985 "trsvcid": "4420" 00:15:55.985 }, 00:15:55.985 "peer_address": { 00:15:55.985 "trtype": "RDMA", 00:15:55.985 "adrfam": "IPv4", 00:15:55.985 "traddr": "192.168.100.8", 00:15:55.985 "trsvcid": "37049" 00:15:55.985 }, 00:15:55.985 "auth": { 00:15:55.985 "state": "completed", 00:15:55.985 "digest": "sha384", 00:15:55.985 "dhgroup": "ffdhe4096" 00:15:55.985 } 00:15:55.985 } 00:15:55.985 ]' 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.985 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.242 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.242 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.242 14:50:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.242 14:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:15:56.808 14:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.065 14:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:57.065 14:50:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.065 14:50:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.065 14:50:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.065 14:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.065 14:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.065 14:50:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:57.322 14:50:31 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.322 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.580 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.580 { 00:15:57.580 "cntlid": 79, 00:15:57.580 "qid": 0, 00:15:57.580 "state": "enabled", 00:15:57.580 "thread": "nvmf_tgt_poll_group_000", 00:15:57.580 "listen_address": { 00:15:57.580 "trtype": "RDMA", 00:15:57.580 "adrfam": "IPv4", 00:15:57.580 "traddr": "192.168.100.8", 00:15:57.580 "trsvcid": "4420" 00:15:57.580 }, 00:15:57.580 "peer_address": { 00:15:57.580 "trtype": "RDMA", 00:15:57.580 "adrfam": "IPv4", 00:15:57.580 "traddr": "192.168.100.8", 00:15:57.580 "trsvcid": "48145" 00:15:57.580 }, 00:15:57.580 "auth": { 00:15:57.580 "state": "completed", 00:15:57.580 "digest": "sha384", 00:15:57.580 "dhgroup": "ffdhe4096" 00:15:57.580 } 00:15:57.580 } 00:15:57.580 ]' 00:15:57.580 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.837 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.837 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.837 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:57.837 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
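(The trace above repeats the same connect_authenticate sequence for each digest/dhgroup/key combination: the host-side bdev_nvme options are narrowed to one digest and DH group, the host NQN is added to the subsystem with the key pair under test, a controller is attached through the host RPC socket, the qpair's auth block is checked with jq, and the controller is detached before the nvme-cli connect/disconnect pass. Below is a minimal, hedged sketch of one such iteration, condensed from the commands visible in this log; it assumes the target and the host application are already running and that the DH-HMAC-CHAP keys (key3 here) were registered earlier in the test, which is not shown in this excerpt.)

# Sketch of one connect_authenticate iteration (illustrative only; paths,
# NQNs and key names are taken from the surrounding log, not a canonical recipe).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# 1) Limit the host to a single digest and DH group for this pass.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# 2) Allow the host on the subsystem with the key under test (target-side RPC,
#    default socket).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

# 3) Attach a controller from the host application and verify authentication
#    completed on the resulting qpair.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma \
    -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key3
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect "completed"

# 4) Tear down before the next digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

(The log then repeats the verification once more through nvme-cli, passing the same secrets via nvme connect --dhchap-secret/--dhchap-ctrl-secret and disconnecting afterwards.)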
00:15:57.837 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.837 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.837 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.095 14:50:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.660 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:58.918 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.175 00:15:59.175 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.175 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.175 14:50:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.431 { 00:15:59.431 "cntlid": 81, 00:15:59.431 "qid": 0, 00:15:59.431 "state": "enabled", 00:15:59.431 "thread": "nvmf_tgt_poll_group_000", 00:15:59.431 "listen_address": { 00:15:59.431 "trtype": "RDMA", 00:15:59.431 "adrfam": "IPv4", 00:15:59.431 "traddr": "192.168.100.8", 00:15:59.431 "trsvcid": "4420" 00:15:59.431 }, 00:15:59.431 "peer_address": { 00:15:59.431 "trtype": "RDMA", 00:15:59.431 "adrfam": "IPv4", 00:15:59.431 "traddr": "192.168.100.8", 00:15:59.431 "trsvcid": "55360" 00:15:59.431 }, 00:15:59.431 "auth": { 00:15:59.431 "state": "completed", 00:15:59.431 "digest": "sha384", 00:15:59.431 "dhgroup": "ffdhe6144" 00:15:59.431 } 00:15:59.431 } 00:15:59.431 ]' 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.431 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.687 14:50:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:00.251 
14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.509 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.074 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.074 14:50:34 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.074 { 00:16:01.074 "cntlid": 83, 00:16:01.074 "qid": 0, 00:16:01.074 "state": "enabled", 00:16:01.074 "thread": "nvmf_tgt_poll_group_000", 00:16:01.074 "listen_address": { 00:16:01.074 "trtype": "RDMA", 00:16:01.074 "adrfam": "IPv4", 00:16:01.074 "traddr": "192.168.100.8", 00:16:01.074 "trsvcid": "4420" 00:16:01.074 }, 00:16:01.074 "peer_address": { 00:16:01.074 "trtype": "RDMA", 00:16:01.074 "adrfam": "IPv4", 00:16:01.074 "traddr": "192.168.100.8", 00:16:01.074 "trsvcid": "40221" 00:16:01.074 }, 00:16:01.074 "auth": { 00:16:01.074 "state": "completed", 00:16:01.074 "digest": "sha384", 00:16:01.074 "dhgroup": "ffdhe6144" 00:16:01.074 } 00:16:01.074 } 00:16:01.074 ]' 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.074 14:50:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.332 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:01.332 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.332 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.332 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.332 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.332 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:02.261 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.261 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:02.261 14:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.261 14:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.261 14:50:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.261 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.261 14:50:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.261 14:50:35 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.261 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.827 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.827 { 00:16:02.827 "cntlid": 85, 00:16:02.827 "qid": 0, 00:16:02.827 "state": "enabled", 00:16:02.827 "thread": "nvmf_tgt_poll_group_000", 00:16:02.827 "listen_address": { 00:16:02.827 "trtype": "RDMA", 00:16:02.827 "adrfam": "IPv4", 00:16:02.827 "traddr": "192.168.100.8", 00:16:02.827 "trsvcid": "4420" 00:16:02.827 }, 00:16:02.827 "peer_address": { 00:16:02.827 "trtype": "RDMA", 00:16:02.827 "adrfam": "IPv4", 00:16:02.827 "traddr": "192.168.100.8", 00:16:02.827 
"trsvcid": "53721" 00:16:02.827 }, 00:16:02.827 "auth": { 00:16:02.827 "state": "completed", 00:16:02.827 "digest": "sha384", 00:16:02.827 "dhgroup": "ffdhe6144" 00:16:02.827 } 00:16:02.827 } 00:16:02.827 ]' 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.827 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.088 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.088 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.088 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.088 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.088 14:50:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.345 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:03.910 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.168 14:50:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:04.425 00:16:04.425 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.425 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.425 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.683 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.683 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.683 14:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.683 14:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.683 14:50:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.684 { 00:16:04.684 "cntlid": 87, 00:16:04.684 "qid": 0, 00:16:04.684 "state": "enabled", 00:16:04.684 "thread": "nvmf_tgt_poll_group_000", 00:16:04.684 "listen_address": { 00:16:04.684 "trtype": "RDMA", 00:16:04.684 "adrfam": "IPv4", 00:16:04.684 "traddr": "192.168.100.8", 00:16:04.684 "trsvcid": "4420" 00:16:04.684 }, 00:16:04.684 "peer_address": { 00:16:04.684 "trtype": "RDMA", 00:16:04.684 "adrfam": "IPv4", 00:16:04.684 "traddr": "192.168.100.8", 00:16:04.684 "trsvcid": "49601" 00:16:04.684 }, 00:16:04.684 "auth": { 00:16:04.684 "state": "completed", 00:16:04.684 "digest": "sha384", 00:16:04.684 "dhgroup": "ffdhe6144" 00:16:04.684 } 00:16:04.684 } 00:16:04.684 ]' 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.684 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.942 14:50:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:05.507 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.764 14:50:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.329 00:16:06.329 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.329 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.329 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.587 { 00:16:06.587 "cntlid": 89, 00:16:06.587 "qid": 0, 00:16:06.587 "state": "enabled", 00:16:06.587 "thread": "nvmf_tgt_poll_group_000", 00:16:06.587 "listen_address": { 00:16:06.587 "trtype": "RDMA", 00:16:06.587 "adrfam": "IPv4", 00:16:06.587 "traddr": "192.168.100.8", 00:16:06.587 "trsvcid": "4420" 00:16:06.587 }, 00:16:06.587 "peer_address": { 00:16:06.587 "trtype": "RDMA", 00:16:06.587 "adrfam": "IPv4", 00:16:06.587 "traddr": "192.168.100.8", 00:16:06.587 "trsvcid": "43630" 00:16:06.587 }, 00:16:06.587 "auth": { 00:16:06.587 "state": "completed", 00:16:06.587 "digest": "sha384", 00:16:06.587 "dhgroup": "ffdhe8192" 00:16:06.587 } 00:16:06.587 } 00:16:06.587 ]' 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.587 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.845 14:50:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:07.411 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.669 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.235 00:16:08.235 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.235 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.235 14:50:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.492 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.493 { 00:16:08.493 "cntlid": 91, 00:16:08.493 "qid": 0, 00:16:08.493 "state": "enabled", 00:16:08.493 "thread": "nvmf_tgt_poll_group_000", 00:16:08.493 "listen_address": { 00:16:08.493 "trtype": "RDMA", 00:16:08.493 "adrfam": "IPv4", 00:16:08.493 "traddr": "192.168.100.8", 00:16:08.493 "trsvcid": "4420" 00:16:08.493 }, 00:16:08.493 "peer_address": { 00:16:08.493 "trtype": "RDMA", 00:16:08.493 "adrfam": "IPv4", 00:16:08.493 "traddr": "192.168.100.8", 00:16:08.493 "trsvcid": "46228" 00:16:08.493 }, 00:16:08.493 "auth": { 00:16:08.493 "state": "completed", 00:16:08.493 "digest": "sha384", 00:16:08.493 "dhgroup": "ffdhe8192" 00:16:08.493 } 00:16:08.493 } 00:16:08.493 ]' 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.493 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.750 14:50:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.316 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 2 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.580 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.143 00:16:10.143 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.143 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.143 14:50:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.143 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.143 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.143 14:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.143 14:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.401 { 00:16:10.401 "cntlid": 93, 00:16:10.401 "qid": 0, 00:16:10.401 "state": "enabled", 00:16:10.401 "thread": "nvmf_tgt_poll_group_000", 00:16:10.401 "listen_address": { 00:16:10.401 "trtype": "RDMA", 00:16:10.401 "adrfam": "IPv4", 00:16:10.401 "traddr": "192.168.100.8", 00:16:10.401 "trsvcid": "4420" 00:16:10.401 }, 00:16:10.401 "peer_address": { 00:16:10.401 "trtype": "RDMA", 00:16:10.401 "adrfam": "IPv4", 00:16:10.401 "traddr": "192.168.100.8", 00:16:10.401 "trsvcid": "36618" 00:16:10.401 }, 00:16:10.401 "auth": { 00:16:10.401 "state": "completed", 00:16:10.401 "digest": "sha384", 00:16:10.401 "dhgroup": "ffdhe8192" 00:16:10.401 } 00:16:10.401 } 00:16:10.401 ]' 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.401 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.658 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:11.224 14:50:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.224 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:11.224 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.224 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.224 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.224 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.224 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.224 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.482 14:50:45 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.482 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:12.047 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.047 { 00:16:12.047 "cntlid": 95, 00:16:12.047 "qid": 0, 00:16:12.047 "state": "enabled", 00:16:12.047 "thread": "nvmf_tgt_poll_group_000", 00:16:12.047 "listen_address": { 00:16:12.047 "trtype": "RDMA", 00:16:12.047 "adrfam": "IPv4", 00:16:12.047 "traddr": "192.168.100.8", 00:16:12.047 "trsvcid": "4420" 00:16:12.047 }, 00:16:12.047 "peer_address": { 00:16:12.047 "trtype": "RDMA", 00:16:12.047 "adrfam": "IPv4", 00:16:12.047 "traddr": "192.168.100.8", 00:16:12.047 "trsvcid": "38676" 00:16:12.047 }, 00:16:12.047 "auth": { 00:16:12.047 "state": "completed", 00:16:12.047 "digest": "sha384", 00:16:12.047 "dhgroup": "ffdhe8192" 00:16:12.047 } 00:16:12.047 } 00:16:12.047 ]' 00:16:12.047 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.305 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:12.305 14:50:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.305 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.305 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.305 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.305 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.305 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.617 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.258 14:50:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:13.258 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.259 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.516 00:16:13.516 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:16:13.516 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.516 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.774 { 00:16:13.774 "cntlid": 97, 00:16:13.774 "qid": 0, 00:16:13.774 "state": "enabled", 00:16:13.774 "thread": "nvmf_tgt_poll_group_000", 00:16:13.774 "listen_address": { 00:16:13.774 "trtype": "RDMA", 00:16:13.774 "adrfam": "IPv4", 00:16:13.774 "traddr": "192.168.100.8", 00:16:13.774 "trsvcid": "4420" 00:16:13.774 }, 00:16:13.774 "peer_address": { 00:16:13.774 "trtype": "RDMA", 00:16:13.774 "adrfam": "IPv4", 00:16:13.774 "traddr": "192.168.100.8", 00:16:13.774 "trsvcid": "49990" 00:16:13.774 }, 00:16:13.774 "auth": { 00:16:13.774 "state": "completed", 00:16:13.774 "digest": "sha512", 00:16:13.774 "dhgroup": "null" 00:16:13.774 } 00:16:13.774 } 00:16:13.774 ]' 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:13.774 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.033 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.033 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.033 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.033 14:50:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:14.600 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.858 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:14.858 14:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.858 14:50:48 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.858 14:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.858 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.858 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:14.858 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.116 14:50:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.374 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.374 { 00:16:15.374 "cntlid": 99, 
00:16:15.374 "qid": 0, 00:16:15.374 "state": "enabled", 00:16:15.374 "thread": "nvmf_tgt_poll_group_000", 00:16:15.374 "listen_address": { 00:16:15.374 "trtype": "RDMA", 00:16:15.374 "adrfam": "IPv4", 00:16:15.374 "traddr": "192.168.100.8", 00:16:15.374 "trsvcid": "4420" 00:16:15.374 }, 00:16:15.374 "peer_address": { 00:16:15.374 "trtype": "RDMA", 00:16:15.374 "adrfam": "IPv4", 00:16:15.374 "traddr": "192.168.100.8", 00:16:15.374 "trsvcid": "38030" 00:16:15.374 }, 00:16:15.374 "auth": { 00:16:15.374 "state": "completed", 00:16:15.374 "digest": "sha512", 00:16:15.374 "dhgroup": "null" 00:16:15.374 } 00:16:15.374 } 00:16:15.374 ]' 00:16:15.374 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.631 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.631 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.631 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:15.631 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.631 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.631 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.631 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.888 14:50:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:16.452 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=null 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.710 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.966 00:16:16.966 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.966 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.966 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.224 { 00:16:17.224 "cntlid": 101, 00:16:17.224 "qid": 0, 00:16:17.224 "state": "enabled", 00:16:17.224 "thread": "nvmf_tgt_poll_group_000", 00:16:17.224 "listen_address": { 00:16:17.224 "trtype": "RDMA", 00:16:17.224 "adrfam": "IPv4", 00:16:17.224 "traddr": "192.168.100.8", 00:16:17.224 "trsvcid": "4420" 00:16:17.224 }, 00:16:17.224 "peer_address": { 00:16:17.224 "trtype": "RDMA", 00:16:17.224 "adrfam": "IPv4", 00:16:17.224 "traddr": "192.168.100.8", 00:16:17.224 "trsvcid": "45596" 00:16:17.224 }, 00:16:17.224 "auth": { 00:16:17.224 "state": "completed", 00:16:17.224 "digest": "sha512", 00:16:17.224 "dhgroup": "null" 00:16:17.224 } 00:16:17.224 } 00:16:17.224 ]' 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.224 14:50:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.224 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:16:17.224 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.224 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.224 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.224 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.482 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:18.048 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.048 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:18.048 14:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.048 14:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.305 14:50:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.305 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.305 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.305 14:50:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:16:18.305 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.561 00:16:18.561 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.561 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.561 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.817 { 00:16:18.817 "cntlid": 103, 00:16:18.817 "qid": 0, 00:16:18.817 "state": "enabled", 00:16:18.817 "thread": "nvmf_tgt_poll_group_000", 00:16:18.817 "listen_address": { 00:16:18.817 "trtype": "RDMA", 00:16:18.817 "adrfam": "IPv4", 00:16:18.817 "traddr": "192.168.100.8", 00:16:18.817 "trsvcid": "4420" 00:16:18.817 }, 00:16:18.817 "peer_address": { 00:16:18.817 "trtype": "RDMA", 00:16:18.817 "adrfam": "IPv4", 00:16:18.817 "traddr": "192.168.100.8", 00:16:18.817 "trsvcid": "37181" 00:16:18.817 }, 00:16:18.817 "auth": { 00:16:18.817 "state": "completed", 00:16:18.817 "digest": "sha512", 00:16:18.817 "dhgroup": "null" 00:16:18.817 } 00:16:18.817 } 00:16:18.817 ]' 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.817 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.075 14:50:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:19.639 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.639 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:19.639 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:19.639 14:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.639 14:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.896 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.153 00:16:20.153 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.153 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.153 14:50:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.410 { 00:16:20.410 "cntlid": 105, 00:16:20.410 "qid": 0, 00:16:20.410 "state": "enabled", 00:16:20.410 "thread": "nvmf_tgt_poll_group_000", 00:16:20.410 "listen_address": { 00:16:20.410 "trtype": "RDMA", 00:16:20.410 "adrfam": "IPv4", 00:16:20.410 "traddr": "192.168.100.8", 00:16:20.410 "trsvcid": "4420" 00:16:20.410 }, 00:16:20.410 "peer_address": { 00:16:20.410 "trtype": "RDMA", 00:16:20.410 "adrfam": "IPv4", 00:16:20.410 "traddr": "192.168.100.8", 00:16:20.410 "trsvcid": "52709" 00:16:20.410 }, 00:16:20.410 "auth": { 00:16:20.410 "state": "completed", 00:16:20.410 "digest": "sha512", 00:16:20.410 "dhgroup": "ffdhe2048" 00:16:20.410 } 00:16:20.410 } 00:16:20.410 ]' 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.410 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.667 14:50:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:21.230 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.488 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.745 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.745 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.745 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.745 00:16:21.745 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.746 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.746 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.003 { 00:16:22.003 "cntlid": 107, 00:16:22.003 "qid": 0, 00:16:22.003 "state": "enabled", 00:16:22.003 "thread": "nvmf_tgt_poll_group_000", 00:16:22.003 "listen_address": { 00:16:22.003 "trtype": "RDMA", 00:16:22.003 "adrfam": "IPv4", 00:16:22.003 "traddr": "192.168.100.8", 00:16:22.003 "trsvcid": "4420" 00:16:22.003 }, 00:16:22.003 "peer_address": { 00:16:22.003 "trtype": "RDMA", 00:16:22.003 "adrfam": "IPv4", 00:16:22.003 "traddr": "192.168.100.8", 00:16:22.003 "trsvcid": "52787" 00:16:22.003 }, 
00:16:22.003 "auth": { 00:16:22.003 "state": "completed", 00:16:22.003 "digest": "sha512", 00:16:22.003 "dhgroup": "ffdhe2048" 00:16:22.003 } 00:16:22.003 } 00:16:22.003 ]' 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:22.003 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.261 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.261 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.261 14:50:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.261 14:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.193 14:50:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.193 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.450 00:16:23.450 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.450 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.450 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.709 { 00:16:23.709 "cntlid": 109, 00:16:23.709 "qid": 0, 00:16:23.709 "state": "enabled", 00:16:23.709 "thread": "nvmf_tgt_poll_group_000", 00:16:23.709 "listen_address": { 00:16:23.709 "trtype": "RDMA", 00:16:23.709 "adrfam": "IPv4", 00:16:23.709 "traddr": "192.168.100.8", 00:16:23.709 "trsvcid": "4420" 00:16:23.709 }, 00:16:23.709 "peer_address": { 00:16:23.709 "trtype": "RDMA", 00:16:23.709 "adrfam": "IPv4", 00:16:23.709 "traddr": "192.168.100.8", 00:16:23.709 "trsvcid": "44551" 00:16:23.709 }, 00:16:23.709 "auth": { 00:16:23.709 "state": "completed", 00:16:23.709 "digest": "sha512", 00:16:23.709 "dhgroup": "ffdhe2048" 00:16:23.709 } 00:16:23.709 } 00:16:23.709 ]' 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.709 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.709 
14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.967 14:50:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:24.532 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:24.789 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.046 00:16:25.046 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.046 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.046 14:50:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.304 { 00:16:25.304 "cntlid": 111, 00:16:25.304 "qid": 0, 00:16:25.304 "state": "enabled", 00:16:25.304 "thread": "nvmf_tgt_poll_group_000", 00:16:25.304 "listen_address": { 00:16:25.304 "trtype": "RDMA", 00:16:25.304 "adrfam": "IPv4", 00:16:25.304 "traddr": "192.168.100.8", 00:16:25.304 "trsvcid": "4420" 00:16:25.304 }, 00:16:25.304 "peer_address": { 00:16:25.304 "trtype": "RDMA", 00:16:25.304 "adrfam": "IPv4", 00:16:25.304 "traddr": "192.168.100.8", 00:16:25.304 "trsvcid": "34685" 00:16:25.304 }, 00:16:25.304 "auth": { 00:16:25.304 "state": "completed", 00:16:25.304 "digest": "sha512", 00:16:25.304 "dhgroup": "ffdhe2048" 00:16:25.304 } 00:16:25.304 } 00:16:25.304 ]' 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.304 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.562 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:26.126 14:50:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.384 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.641 00:16:26.641 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.641 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.641 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.899 { 00:16:26.899 "cntlid": 113, 00:16:26.899 "qid": 0, 00:16:26.899 "state": "enabled", 00:16:26.899 "thread": "nvmf_tgt_poll_group_000", 00:16:26.899 "listen_address": { 00:16:26.899 "trtype": "RDMA", 00:16:26.899 "adrfam": "IPv4", 00:16:26.899 "traddr": "192.168.100.8", 00:16:26.899 "trsvcid": "4420" 00:16:26.899 }, 00:16:26.899 "peer_address": { 00:16:26.899 "trtype": "RDMA", 00:16:26.899 "adrfam": "IPv4", 00:16:26.899 "traddr": "192.168.100.8", 00:16:26.899 "trsvcid": "34627" 00:16:26.899 }, 00:16:26.899 "auth": { 00:16:26.899 "state": "completed", 00:16:26.899 "digest": "sha512", 00:16:26.899 "dhgroup": "ffdhe3072" 00:16:26.899 } 00:16:26.899 } 00:16:26.899 ]' 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.899 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.157 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.157 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.157 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.157 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.157 14:51:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.157 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 
00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.090 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.091 14:51:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.348 00:16:28.348 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.348 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.348 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.606 { 00:16:28.606 "cntlid": 115, 00:16:28.606 "qid": 0, 00:16:28.606 "state": "enabled", 00:16:28.606 "thread": "nvmf_tgt_poll_group_000", 00:16:28.606 "listen_address": { 00:16:28.606 "trtype": "RDMA", 00:16:28.606 "adrfam": "IPv4", 00:16:28.606 "traddr": "192.168.100.8", 00:16:28.606 "trsvcid": "4420" 00:16:28.606 }, 00:16:28.606 "peer_address": { 00:16:28.606 "trtype": "RDMA", 00:16:28.606 "adrfam": "IPv4", 00:16:28.606 "traddr": "192.168.100.8", 00:16:28.606 "trsvcid": "59271" 00:16:28.606 }, 00:16:28.606 "auth": { 00:16:28.606 "state": "completed", 00:16:28.606 "digest": "sha512", 00:16:28.606 "dhgroup": "ffdhe3072" 00:16:28.606 } 00:16:28.606 } 00:16:28.606 ]' 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.606 14:51:02 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.606 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.865 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.865 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.865 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.865 14:51:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:29.437 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.699 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:29.699 14:51:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.699 14:51:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.699 14:51:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.699 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.699 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.699 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.956 14:51:03 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.956 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.956 14:51:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.213 { 00:16:30.213 "cntlid": 117, 00:16:30.213 "qid": 0, 00:16:30.213 "state": "enabled", 00:16:30.213 "thread": "nvmf_tgt_poll_group_000", 00:16:30.213 "listen_address": { 00:16:30.213 "trtype": "RDMA", 00:16:30.213 "adrfam": "IPv4", 00:16:30.213 "traddr": "192.168.100.8", 00:16:30.213 "trsvcid": "4420" 00:16:30.213 }, 00:16:30.213 "peer_address": { 00:16:30.213 "trtype": "RDMA", 00:16:30.213 "adrfam": "IPv4", 00:16:30.213 "traddr": "192.168.100.8", 00:16:30.213 "trsvcid": "51054" 00:16:30.213 }, 00:16:30.213 "auth": { 00:16:30.213 "state": "completed", 00:16:30.213 "digest": "sha512", 00:16:30.213 "dhgroup": "ffdhe3072" 00:16:30.213 } 00:16:30.213 } 00:16:30.213 ]' 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.213 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.470 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:30.470 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.470 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.470 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.470 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.470 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:31.403 14:51:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.403 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.660 00:16:31.660 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.660 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.660 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.918 { 00:16:31.918 "cntlid": 119, 00:16:31.918 "qid": 0, 00:16:31.918 "state": "enabled", 00:16:31.918 "thread": "nvmf_tgt_poll_group_000", 00:16:31.918 "listen_address": { 00:16:31.918 "trtype": "RDMA", 00:16:31.918 "adrfam": "IPv4", 00:16:31.918 "traddr": "192.168.100.8", 00:16:31.918 "trsvcid": "4420" 00:16:31.918 }, 00:16:31.918 "peer_address": { 00:16:31.918 "trtype": "RDMA", 00:16:31.918 "adrfam": "IPv4", 00:16:31.918 "traddr": "192.168.100.8", 00:16:31.918 "trsvcid": "56693" 00:16:31.918 }, 00:16:31.918 "auth": { 00:16:31.918 "state": "completed", 00:16:31.918 "digest": "sha512", 00:16:31.918 "dhgroup": "ffdhe3072" 00:16:31.918 } 00:16:31.918 } 00:16:31.918 ]' 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.918 14:51:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.175 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:32.737 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.993 
14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.993 14:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.250 14:51:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.250 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.250 14:51:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.250 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.506 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.506 { 00:16:33.506 "cntlid": 121, 00:16:33.506 "qid": 0, 00:16:33.506 "state": "enabled", 00:16:33.506 "thread": "nvmf_tgt_poll_group_000", 00:16:33.506 "listen_address": { 00:16:33.506 "trtype": "RDMA", 
00:16:33.506 "adrfam": "IPv4", 00:16:33.506 "traddr": "192.168.100.8", 00:16:33.506 "trsvcid": "4420" 00:16:33.506 }, 00:16:33.506 "peer_address": { 00:16:33.506 "trtype": "RDMA", 00:16:33.506 "adrfam": "IPv4", 00:16:33.506 "traddr": "192.168.100.8", 00:16:33.506 "trsvcid": "57167" 00:16:33.506 }, 00:16:33.506 "auth": { 00:16:33.507 "state": "completed", 00:16:33.507 "digest": "sha512", 00:16:33.507 "dhgroup": "ffdhe4096" 00:16:33.507 } 00:16:33.507 } 00:16:33.507 ]' 00:16:33.507 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.507 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.507 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.763 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.763 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.763 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.763 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.764 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.021 14:51:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.584 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key1 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.867 14:51:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.868 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.868 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.124 00:16:35.124 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.124 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.124 14:51:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.381 { 00:16:35.381 "cntlid": 123, 00:16:35.381 "qid": 0, 00:16:35.381 "state": "enabled", 00:16:35.381 "thread": "nvmf_tgt_poll_group_000", 00:16:35.381 "listen_address": { 00:16:35.381 "trtype": "RDMA", 00:16:35.381 "adrfam": "IPv4", 00:16:35.381 "traddr": "192.168.100.8", 00:16:35.381 "trsvcid": "4420" 00:16:35.381 }, 00:16:35.381 "peer_address": { 00:16:35.381 "trtype": "RDMA", 00:16:35.381 "adrfam": "IPv4", 00:16:35.381 "traddr": "192.168.100.8", 00:16:35.381 "trsvcid": "36631" 00:16:35.381 }, 00:16:35.381 "auth": { 00:16:35.381 "state": "completed", 00:16:35.381 "digest": "sha512", 00:16:35.381 "dhgroup": "ffdhe4096" 00:16:35.381 } 00:16:35.381 } 00:16:35.381 ]' 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.381 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.638 14:51:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:36.200 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:36.457 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.714 00:16:36.714 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.714 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.714 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.972 { 00:16:36.972 "cntlid": 125, 00:16:36.972 "qid": 0, 00:16:36.972 "state": "enabled", 00:16:36.972 "thread": "nvmf_tgt_poll_group_000", 00:16:36.972 "listen_address": { 00:16:36.972 "trtype": "RDMA", 00:16:36.972 "adrfam": "IPv4", 00:16:36.972 "traddr": "192.168.100.8", 00:16:36.972 "trsvcid": "4420" 00:16:36.972 }, 00:16:36.972 "peer_address": { 00:16:36.972 "trtype": "RDMA", 00:16:36.972 "adrfam": "IPv4", 00:16:36.972 "traddr": "192.168.100.8", 00:16:36.972 "trsvcid": "42638" 00:16:36.972 }, 00:16:36.972 "auth": { 00:16:36.972 "state": "completed", 00:16:36.972 "digest": "sha512", 00:16:36.972 "dhgroup": "ffdhe4096" 00:16:36.972 } 00:16:36.972 } 00:16:36.972 ]' 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.972 14:51:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.229 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:37.794 14:51:11 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.051 14:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.308 14:51:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.308 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.308 14:51:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.565 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.565 
14:51:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.565 { 00:16:38.565 "cntlid": 127, 00:16:38.565 "qid": 0, 00:16:38.565 "state": "enabled", 00:16:38.565 "thread": "nvmf_tgt_poll_group_000", 00:16:38.565 "listen_address": { 00:16:38.565 "trtype": "RDMA", 00:16:38.565 "adrfam": "IPv4", 00:16:38.565 "traddr": "192.168.100.8", 00:16:38.565 "trsvcid": "4420" 00:16:38.565 }, 00:16:38.565 "peer_address": { 00:16:38.565 "trtype": "RDMA", 00:16:38.565 "adrfam": "IPv4", 00:16:38.565 "traddr": "192.168.100.8", 00:16:38.565 "trsvcid": "48081" 00:16:38.565 }, 00:16:38.565 "auth": { 00:16:38.565 "state": "completed", 00:16:38.565 "digest": "sha512", 00:16:38.565 "dhgroup": "ffdhe4096" 00:16:38.565 } 00:16:38.565 } 00:16:38.565 ]' 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.565 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.822 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:38.822 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.822 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.822 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.822 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.079 14:51:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.644 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.901 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:39.901 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.901 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.901 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:39.901 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.901 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.902 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.902 14:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.902 14:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.902 14:51:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.902 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.902 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.159 00:16:40.159 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.159 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.159 14:51:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.417 { 00:16:40.417 "cntlid": 129, 00:16:40.417 "qid": 0, 00:16:40.417 "state": "enabled", 00:16:40.417 "thread": "nvmf_tgt_poll_group_000", 00:16:40.417 "listen_address": { 00:16:40.417 "trtype": "RDMA", 00:16:40.417 "adrfam": "IPv4", 00:16:40.417 "traddr": "192.168.100.8", 00:16:40.417 "trsvcid": "4420" 00:16:40.417 }, 00:16:40.417 "peer_address": { 00:16:40.417 "trtype": "RDMA", 00:16:40.417 "adrfam": "IPv4", 00:16:40.417 "traddr": "192.168.100.8", 00:16:40.417 "trsvcid": "37005" 00:16:40.417 }, 00:16:40.417 "auth": { 
00:16:40.417 "state": "completed", 00:16:40.417 "digest": "sha512", 00:16:40.417 "dhgroup": "ffdhe6144" 00:16:40.417 } 00:16:40.417 } 00:16:40.417 ]' 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.417 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.675 14:51:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:41.240 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.499 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.065 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.065 { 00:16:42.065 "cntlid": 131, 00:16:42.065 "qid": 0, 00:16:42.065 "state": "enabled", 00:16:42.065 "thread": "nvmf_tgt_poll_group_000", 00:16:42.065 "listen_address": { 00:16:42.065 "trtype": "RDMA", 00:16:42.065 "adrfam": "IPv4", 00:16:42.065 "traddr": "192.168.100.8", 00:16:42.065 "trsvcid": "4420" 00:16:42.065 }, 00:16:42.065 "peer_address": { 00:16:42.065 "trtype": "RDMA", 00:16:42.065 "adrfam": "IPv4", 00:16:42.065 "traddr": "192.168.100.8", 00:16:42.065 "trsvcid": "43748" 00:16:42.065 }, 00:16:42.065 "auth": { 00:16:42.065 "state": "completed", 00:16:42.065 "digest": "sha512", 00:16:42.065 "dhgroup": "ffdhe6144" 00:16:42.065 } 00:16:42.065 } 00:16:42.065 ]' 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.065 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.323 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.323 14:51:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.323 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.323 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.323 
14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.323 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:43.257 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.257 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:43.257 14:51:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.257 14:51:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.257 14:51:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.257 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.257 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.258 14:51:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.258 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.823 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.823 { 00:16:43.823 "cntlid": 133, 00:16:43.823 "qid": 0, 00:16:43.823 "state": "enabled", 00:16:43.823 "thread": "nvmf_tgt_poll_group_000", 00:16:43.823 "listen_address": { 00:16:43.823 "trtype": "RDMA", 00:16:43.823 "adrfam": "IPv4", 00:16:43.823 "traddr": "192.168.100.8", 00:16:43.823 "trsvcid": "4420" 00:16:43.823 }, 00:16:43.823 "peer_address": { 00:16:43.823 "trtype": "RDMA", 00:16:43.823 "adrfam": "IPv4", 00:16:43.823 "traddr": "192.168.100.8", 00:16:43.823 "trsvcid": "59741" 00:16:43.823 }, 00:16:43.823 "auth": { 00:16:43.823 "state": "completed", 00:16:43.823 "digest": "sha512", 00:16:43.823 "dhgroup": "ffdhe6144" 00:16:43.823 } 00:16:43.823 } 00:16:43.823 ]' 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.823 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.080 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.080 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.080 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.080 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.080 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.080 14:51:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.014 14:51:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.580 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.580 { 00:16:45.580 "cntlid": 135, 00:16:45.580 "qid": 0, 00:16:45.580 "state": "enabled", 00:16:45.580 "thread": "nvmf_tgt_poll_group_000", 00:16:45.580 "listen_address": { 00:16:45.580 "trtype": "RDMA", 00:16:45.580 "adrfam": "IPv4", 00:16:45.580 "traddr": "192.168.100.8", 00:16:45.580 "trsvcid": "4420" 00:16:45.580 }, 00:16:45.580 "peer_address": { 00:16:45.580 "trtype": "RDMA", 00:16:45.580 "adrfam": "IPv4", 00:16:45.580 "traddr": "192.168.100.8", 00:16:45.580 "trsvcid": "60792" 00:16:45.580 }, 00:16:45.580 "auth": { 00:16:45.580 "state": "completed", 00:16:45.580 "digest": "sha512", 00:16:45.580 "dhgroup": "ffdhe6144" 00:16:45.580 } 00:16:45.580 } 00:16:45.580 ]' 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.580 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.837 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.837 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.837 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.837 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.837 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.837 14:51:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:46.510 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 
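At this point the run has reached the sha512/ffdhe8192 pass of connect_authenticate, and the same target/auth.sh sequence repeats for each key index. Condensed into plain shell, with the hostrpc helper expanded to the rpc.py call shown at target/auth.sh@31 and rpc_cmd assumed to reach the target app on its default RPC socket (an assumption; the trace disables xtrace around rpc_cmd), one iteration looks roughly like this sketch:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }    # host-side bdev_nvme application, as at target/auth.sh@31
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

    # restrict the host to the digest/dhgroup under test
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # authorize the host on the target with the key names used throughout this run
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # attach from the SPDK host, check the qpair, then tear down
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"          # auth.state should read "completed"
    hostrpc bdev_nvme_detach_controller nvme0
    # repeat the handshake with the kernel initiator, passing the literal DHHC-1 secrets
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
            --hostid 803833e2-2ada-e911-906e-0017a4403562 \
            --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"   # illustrative variable names
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Here key0_secret and ckey0_secret merely stand in for the DHHC-1:00:... and DHHC-1:03:... strings printed on the nvme connect lines of this log.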
00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.794 14:51:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.359 00:16:47.359 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.359 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.359 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.617 { 00:16:47.617 "cntlid": 137, 00:16:47.617 "qid": 0, 00:16:47.617 "state": "enabled", 00:16:47.617 "thread": "nvmf_tgt_poll_group_000", 00:16:47.617 "listen_address": { 00:16:47.617 "trtype": "RDMA", 00:16:47.617 "adrfam": "IPv4", 00:16:47.617 "traddr": "192.168.100.8", 00:16:47.617 "trsvcid": "4420" 00:16:47.617 }, 00:16:47.617 "peer_address": { 00:16:47.617 "trtype": "RDMA", 00:16:47.617 "adrfam": "IPv4", 00:16:47.617 "traddr": "192.168.100.8", 00:16:47.617 "trsvcid": "52439" 00:16:47.617 }, 00:16:47.617 "auth": { 00:16:47.617 "state": "completed", 00:16:47.617 "digest": "sha512", 00:16:47.617 "dhgroup": "ffdhe8192" 00:16:47.617 } 00:16:47.617 } 00:16:47.617 ]' 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.617 14:51:21 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.617 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.875 14:51:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.441 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
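Right after each attach, the target side is queried and three fields of the first qpair's auth object are compared against what was configured, using the jq filters visible in the trace. A minimal stand-alone version of that verification, under the same default-socket assumption for the target RPC, would be:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]      # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # DH-HMAC-CHAP handshake finished

Any other auth.state would fail the comparison and, under the suite's error handling, abort the run.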
00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.699 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.264 00:16:49.264 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.264 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.264 14:51:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.264 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.264 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.264 14:51:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.264 14:51:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.520 { 00:16:49.520 "cntlid": 139, 00:16:49.520 "qid": 0, 00:16:49.520 "state": "enabled", 00:16:49.520 "thread": "nvmf_tgt_poll_group_000", 00:16:49.520 "listen_address": { 00:16:49.520 "trtype": "RDMA", 00:16:49.520 "adrfam": "IPv4", 00:16:49.520 "traddr": "192.168.100.8", 00:16:49.520 "trsvcid": "4420" 00:16:49.520 }, 00:16:49.520 "peer_address": { 00:16:49.520 "trtype": "RDMA", 00:16:49.520 "adrfam": "IPv4", 00:16:49.520 "traddr": "192.168.100.8", 00:16:49.520 "trsvcid": "37788" 00:16:49.520 }, 00:16:49.520 "auth": { 00:16:49.520 "state": "completed", 00:16:49.520 "digest": "sha512", 00:16:49.520 "dhgroup": "ffdhe8192" 00:16:49.520 } 00:16:49.520 } 00:16:49.520 ]' 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.520 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.777 14:51:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:ZGRlZmEyMzZhNWI2NTdhNDc3ZWVlNjYyYjE3MzhlODLmSnbZ: --dhchap-ctrl-secret DHHC-1:02:ZjkxZDM1NmU0ZTg1ZWYxY2QzOTE0Y2ZjOGVkOTY2MmI4NDU0ZjIzMzFlNTBlNTFiwS9w/Q==: 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.340 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.597 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.162 00:16:51.162 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.162 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 
-- # jq -r '.[].name' 00:16:51.162 14:51:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.162 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.162 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.162 14:51:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.162 14:51:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.162 14:51:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.162 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.162 { 00:16:51.162 "cntlid": 141, 00:16:51.162 "qid": 0, 00:16:51.162 "state": "enabled", 00:16:51.162 "thread": "nvmf_tgt_poll_group_000", 00:16:51.162 "listen_address": { 00:16:51.162 "trtype": "RDMA", 00:16:51.162 "adrfam": "IPv4", 00:16:51.162 "traddr": "192.168.100.8", 00:16:51.162 "trsvcid": "4420" 00:16:51.162 }, 00:16:51.162 "peer_address": { 00:16:51.162 "trtype": "RDMA", 00:16:51.162 "adrfam": "IPv4", 00:16:51.162 "traddr": "192.168.100.8", 00:16:51.162 "trsvcid": "50823" 00:16:51.162 }, 00:16:51.162 "auth": { 00:16:51.162 "state": "completed", 00:16:51.162 "digest": "sha512", 00:16:51.162 "dhgroup": "ffdhe8192" 00:16:51.162 } 00:16:51.162 } 00:16:51.162 ]' 00:16:51.162 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.420 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.420 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.420 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.420 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.420 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.420 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.420 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.677 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:OWU1MjY5MzVmZTRkZGQyYzJjM2VjYTJkZmE1MjhiMTY1MTdmMWVkOWIwZjg1ODg0HgnKFQ==: --dhchap-ctrl-secret DHHC-1:01:ODZmMjBhOTQ4OTQzOTU0YjRjYzkxMTVlZDkzNjNiMDh6+1KJ: 00:16:52.243 14:51:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.243 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:52.243 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.243 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.243 14:51:26 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.243 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.243 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.243 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.500 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.065 00:16:53.065 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.065 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.065 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.065 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.065 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.066 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.066 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.066 14:51:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.066 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.066 { 00:16:53.066 "cntlid": 143, 00:16:53.066 "qid": 0, 00:16:53.066 "state": "enabled", 00:16:53.066 "thread": "nvmf_tgt_poll_group_000", 00:16:53.066 "listen_address": { 00:16:53.066 "trtype": "RDMA", 00:16:53.066 
"adrfam": "IPv4", 00:16:53.066 "traddr": "192.168.100.8", 00:16:53.066 "trsvcid": "4420" 00:16:53.066 }, 00:16:53.066 "peer_address": { 00:16:53.066 "trtype": "RDMA", 00:16:53.066 "adrfam": "IPv4", 00:16:53.066 "traddr": "192.168.100.8", 00:16:53.066 "trsvcid": "50892" 00:16:53.066 }, 00:16:53.066 "auth": { 00:16:53.066 "state": "completed", 00:16:53.066 "digest": "sha512", 00:16:53.066 "dhgroup": "ffdhe8192" 00:16:53.066 } 00:16:53.066 } 00:16:53.066 ]' 00:16:53.066 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.066 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.066 14:51:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.323 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.323 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.323 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.323 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.323 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.323 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.266 14:51:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:54.266 14:51:28 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.266 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.831 00:16:54.831 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.831 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.831 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.089 { 00:16:55.089 "cntlid": 145, 00:16:55.089 "qid": 0, 00:16:55.089 "state": "enabled", 00:16:55.089 "thread": "nvmf_tgt_poll_group_000", 00:16:55.089 "listen_address": { 00:16:55.089 "trtype": "RDMA", 00:16:55.089 "adrfam": "IPv4", 00:16:55.089 "traddr": "192.168.100.8", 00:16:55.089 "trsvcid": "4420" 00:16:55.089 }, 00:16:55.089 "peer_address": { 00:16:55.089 "trtype": "RDMA", 00:16:55.089 "adrfam": "IPv4", 00:16:55.089 "traddr": "192.168.100.8", 00:16:55.089 "trsvcid": "45055" 00:16:55.089 }, 00:16:55.089 "auth": { 00:16:55.089 "state": "completed", 00:16:55.089 "digest": "sha512", 00:16:55.089 "dhgroup": "ffdhe8192" 00:16:55.089 } 00:16:55.089 } 00:16:55.089 ]' 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.089 14:51:28 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.089 14:51:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.346 14:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YThkOTBiZjczYmU0MDA3NjE3N2Q1MDFiODkwOWEzNGY3Mjc1YjNiNzEwM2JhODRm77kZEg==: --dhchap-ctrl-secret DHHC-1:03:NjM5MGE3MDc5MDk5MjdkZmUyOTc4YTBjZTNkOTJkY2ZkNDRhNTVjYmFkMDkwNmIzZjMxMzdkNDRiMmU1OTg0M5gPKx4=: 00:16:55.911 14:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.911 14:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:55.911 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.911 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:56.167 
14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:56.167 14:51:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:28.218 request: 00:17:28.218 { 00:17:28.218 "name": "nvme0", 00:17:28.218 "trtype": "rdma", 00:17:28.218 "traddr": "192.168.100.8", 00:17:28.218 "adrfam": "ipv4", 00:17:28.218 "trsvcid": "4420", 00:17:28.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:28.218 "prchk_reftag": false, 00:17:28.218 "prchk_guard": false, 00:17:28.218 "hdgst": false, 00:17:28.218 "ddgst": false, 00:17:28.218 "dhchap_key": "key2", 00:17:28.218 "method": "bdev_nvme_attach_controller", 00:17:28.218 "req_id": 1 00:17:28.218 } 00:17:28.218 Got JSON-RPC error response 00:17:28.218 response: 00:17:28.218 { 00:17:28.218 "code": -5, 00:17:28.218 "message": "Input/output error" 00:17:28.218 } 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:28.218 request: 00:17:28.218 { 00:17:28.218 "name": "nvme0", 00:17:28.218 "trtype": "rdma", 00:17:28.218 "traddr": "192.168.100.8", 00:17:28.218 "adrfam": "ipv4", 00:17:28.218 "trsvcid": "4420", 00:17:28.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:28.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:28.218 "prchk_reftag": false, 00:17:28.218 "prchk_guard": false, 00:17:28.218 "hdgst": false, 00:17:28.218 "ddgst": false, 00:17:28.218 "dhchap_key": "key1", 00:17:28.218 "dhchap_ctrlr_key": "ckey2", 00:17:28.218 "method": "bdev_nvme_attach_controller", 00:17:28.218 "req_id": 1 00:17:28.218 } 00:17:28.218 Got JSON-RPC error response 00:17:28.218 response: 00:17:28.218 { 00:17:28.218 "code": -5, 00:17:28.218 "message": "Input/output error" 00:17:28.218 } 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.218 14:52:00 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.218 14:52:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.271 request: 00:18:00.271 { 00:18:00.271 "name": "nvme0", 00:18:00.271 "trtype": "rdma", 00:18:00.271 "traddr": "192.168.100.8", 00:18:00.271 "adrfam": "ipv4", 00:18:00.271 "trsvcid": "4420", 00:18:00.271 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:00.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:00.271 "prchk_reftag": false, 00:18:00.271 "prchk_guard": false, 00:18:00.271 "hdgst": false, 00:18:00.271 "ddgst": false, 00:18:00.271 "dhchap_key": "key1", 00:18:00.271 "dhchap_ctrlr_key": "ckey1", 00:18:00.271 "method": "bdev_nvme_attach_controller", 00:18:00.271 "req_id": 1 00:18:00.271 } 00:18:00.271 Got JSON-RPC error response 00:18:00.271 response: 00:18:00.271 { 00:18:00.271 "code": -5, 00:18:00.271 "message": "Input/output error" 00:18:00.271 } 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2828973 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2828973 ']' 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2828973 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2828973 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2828973' 00:18:00.271 killing process with pid 2828973 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2828973 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2828973 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2861790 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2861790 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2861790 ']' 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.271 14:52:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2861790 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2861790 ']' 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.271 14:52:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.271 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.271 { 00:18:00.271 "cntlid": 1, 00:18:00.271 "qid": 0, 00:18:00.271 "state": "enabled", 00:18:00.271 "thread": "nvmf_tgt_poll_group_000", 00:18:00.271 "listen_address": { 00:18:00.271 "trtype": "RDMA", 00:18:00.271 "adrfam": "IPv4", 00:18:00.271 "traddr": "192.168.100.8", 00:18:00.271 "trsvcid": "4420" 00:18:00.271 }, 00:18:00.271 "peer_address": { 00:18:00.271 "trtype": "RDMA", 00:18:00.271 "adrfam": "IPv4", 00:18:00.271 "traddr": "192.168.100.8", 00:18:00.271 "trsvcid": "46533" 00:18:00.271 }, 00:18:00.271 "auth": { 00:18:00.271 "state": "completed", 00:18:00.271 "digest": "sha512", 00:18:00.271 "dhgroup": "ffdhe8192" 00:18:00.271 } 00:18:00.271 } 00:18:00.271 ]' 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.271 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.272 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.272 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.272 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.272 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.272 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.272 14:52:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 
803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:ODA4MmE1ZGViNGE2YThkNGE1N2UwNTY3MTdkMTQ5NWMyYWRiZTlkZmM1YzY4MmVmNmFkODUzOTNjMDVkMTcyZjoJbsE=: 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:00.836 14:52:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:01.093 14:52:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.093 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:01.093 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.093 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:01.094 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:01.094 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:01.094 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:01.094 14:52:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:01.094 14:52:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.155 request: 00:18:33.155 { 00:18:33.155 "name": "nvme0", 
00:18:33.155 "trtype": "rdma", 00:18:33.155 "traddr": "192.168.100.8", 00:18:33.155 "adrfam": "ipv4", 00:18:33.155 "trsvcid": "4420", 00:18:33.155 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:33.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:33.155 "prchk_reftag": false, 00:18:33.155 "prchk_guard": false, 00:18:33.155 "hdgst": false, 00:18:33.155 "ddgst": false, 00:18:33.155 "dhchap_key": "key3", 00:18:33.155 "method": "bdev_nvme_attach_controller", 00:18:33.155 "req_id": 1 00:18:33.155 } 00:18:33.155 Got JSON-RPC error response 00:18:33.155 response: 00:18:33.155 { 00:18:33.155 "code": -5, 00:18:33.155 "message": "Input/output error" 00:18:33.155 } 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:33.155 14:53:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.208 request: 00:19:05.208 { 00:19:05.208 "name": "nvme0", 
00:19:05.208 "trtype": "rdma", 00:19:05.208 "traddr": "192.168.100.8", 00:19:05.208 "adrfam": "ipv4", 00:19:05.208 "trsvcid": "4420", 00:19:05.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:05.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:19:05.208 "prchk_reftag": false, 00:19:05.208 "prchk_guard": false, 00:19:05.208 "hdgst": false, 00:19:05.208 "ddgst": false, 00:19:05.208 "dhchap_key": "key3", 00:19:05.208 "method": "bdev_nvme_attach_controller", 00:19:05.208 "req_id": 1 00:19:05.208 } 00:19:05.208 Got JSON-RPC error response 00:19:05.208 response: 00:19:05.208 { 00:19:05.208 "code": -5, 00:19:05.208 "message": "Input/output error" 00:19:05.208 } 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:05.208 14:53:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:05.208 request: 00:19:05.208 { 00:19:05.208 "name": "nvme0", 00:19:05.208 "trtype": "rdma", 00:19:05.208 "traddr": "192.168.100.8", 00:19:05.208 "adrfam": "ipv4", 00:19:05.208 "trsvcid": "4420", 00:19:05.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:05.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:19:05.208 "prchk_reftag": false, 00:19:05.208 "prchk_guard": false, 00:19:05.208 "hdgst": false, 00:19:05.208 "ddgst": false, 00:19:05.208 "dhchap_key": "key0", 00:19:05.208 "dhchap_ctrlr_key": "key1", 00:19:05.208 "method": "bdev_nvme_attach_controller", 00:19:05.208 "req_id": 1 00:19:05.208 } 00:19:05.208 Got JSON-RPC error response 00:19:05.208 response: 00:19:05.208 { 00:19:05.208 "code": -5, 00:19:05.208 "message": "Input/output error" 00:19:05.208 } 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:05.208 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2829220 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2829220 ']' 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2829220 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2829220 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2829220' 00:19:05.208 killing process with pid 2829220 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2829220 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2829220 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:05.208 rmmod nvme_rdma 00:19:05.208 rmmod nvme_fabrics 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:05.208 14:53:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2861790 ']' 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2861790 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2861790 ']' 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2861790 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.208 14:53:37 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2861790 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:05.208 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2861790' 00:19:05.209 killing process with pid 2861790 00:19:05.209 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2861790 00:19:05.209 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2861790 00:19:05.209 14:53:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:05.209 14:53:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.6EH /tmp/spdk.key-sha256.jcV /tmp/spdk.key-sha384.vky /tmp/spdk.key-sha512.v8r /tmp/spdk.key-sha512.lPa /tmp/spdk.key-sha384.nna /tmp/spdk.key-sha256.LBM '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:19:05.209 00:19:05.209 real 4m20.731s 00:19:05.209 user 9m23.340s 00:19:05.209 sys 0m18.804s 00:19:05.209 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:05.209 14:53:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.209 ************************************ 00:19:05.209 END TEST nvmf_auth_target 00:19:05.209 ************************************ 00:19:05.209 14:53:37 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:19:05.209 14:53:37 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:19:05.209 14:53:37 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:05.209 14:53:37 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:05.209 14:53:37 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:19:05.209 14:53:37 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:19:05.209 14:53:37 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:05.209 14:53:37 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:05.209 14:53:37 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.209 14:53:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:05.209 ************************************ 00:19:05.209 START TEST nvmf_srq_overwhelm 00:19:05.209 ************************************ 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:19:05.209 * Looking for test storage... 
00:19:05.209 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:19:05.209 14:53:37 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:08.495 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:08.495 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:08.495 Found net devices under 0000:da:00.0: mlx_0_0 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:08.495 Found net devices under 0000:da:00.1: mlx_0_1 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:19:08.495 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:08.496 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:08.496 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:08.496 altname enp218s0f0np0 00:19:08.496 altname ens818f0np0 00:19:08.496 inet 192.168.100.8/24 scope global mlx_0_0 00:19:08.496 valid_lft forever preferred_lft forever 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:08.496 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:08.496 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:08.496 altname enp218s0f1np1 00:19:08.496 altname ens818f1np1 00:19:08.496 inet 192.168.100.9/24 scope global mlx_0_1 00:19:08.496 valid_lft forever preferred_lft forever 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
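The trace above loads the kernel IB/RDMA module stack and then reads back the IPv4 address of each mlx_0_* interface with "ip -o -4 addr show | awk | cut". A minimal standalone sketch of those two steps, with module and interface names taken from the trace (the helper name get_ipv4 is ours, not SPDK's):

    #!/usr/bin/env bash
    # Load the IB/RDMA modules the test relies on (same list as load_ib_rdma_modules).
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

    # Return the first IPv4 address configured on an interface,
    # mirroring the get_ip_address pipeline in the trace.
    get_ipv4() {
        local ifc=$1
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    get_ipv4 mlx_0_0   # 192.168.100.8 in this run
    get_ipv4 mlx_0_1   # 192.168.100.9 in this run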
00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:08.496 
192.168.100.9' 00:19:08.496 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:08.497 192.168.100.9' 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:08.497 192.168.100.9' 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=2875563 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 2875563 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@829 -- # '[' -z 2875563 ']' 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.497 14:53:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:08.497 [2024-07-15 14:53:42.390194] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
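At this point the script has a newline-separated RDMA_IP_LIST and splits it into first and second target IPs with head/tail, loads nvme-rdma on the host side, and starts the target application before waiting for its RPC socket. A condensed sketch of that sequence, assuming it is run from an SPDK build tree like the one in the trace; the polling loop is a simplification of SPDK's waitforlisten helper, not its actual implementation:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'              # gathered from the mlx interfaces above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    modprobe nvme-rdma                                        # host-side driver used by 'nvme connect'

    # Start the target with the same core mask and trace flags as nvmfappstart -m 0xF.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Simplified stand-in for waitforlisten: poll until the RPC socket appears.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done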
00:19:08.497 [2024-07-15 14:53:42.390240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.497 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.755 [2024-07-15 14:53:42.445758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.755 [2024-07-15 14:53:42.527011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.755 [2024-07-15 14:53:42.527045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.755 [2024-07-15 14:53:42.527052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.755 [2024-07-15 14:53:42.527058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.755 [2024-07-15 14:53:42.527063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.755 [2024-07-15 14:53:42.527108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.755 [2024-07-15 14:53:42.527206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.755 [2024-07-15 14:53:42.527285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.755 [2024-07-15 14:53:42.527287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # return 0 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.320 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:09.578 [2024-07-15 14:53:43.252304] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20f5cc0/0x20fa1b0) succeed. 00:19:09.578 [2024-07-15 14:53:43.261424] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20f7300/0x213b840) succeed. 
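The rpc_cmd nvmf_create_transport call above is what makes the target open the two mlx5 IB devices (the "create_ib_device ... succeed" notices). rpc_cmd is a thin test wrapper around the RPC client; issued directly it would look roughly like this, with the options copied verbatim from the trace (-u and -s are, to the best of our reading, the IO-unit-size and max-SRQ-depth knobs relevant to this SRQ test):

    # Create the RDMA transport with the same options the test passes.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024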
00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:09.578 Malloc0 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:09.578 [2024-07-15 14:53:43.356341] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.578 14:53:43 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:10.626 Malloc1 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.626 14:53:44 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:11.557 Malloc2 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.557 14:53:45 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:19:12.488 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:19:12.489 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:19:12.489 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:12.489 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:19:12.489 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:12.489 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:19:12.489 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.747 Malloc3 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.747 14:53:46 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.747 14:53:46 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.681 Malloc4 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.681 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.682 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:19:13.682 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.682 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:19:13.682 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.682 14:53:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.615 Malloc5 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.615 14:53:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1235 -- # local i=0 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:19:15.991 14:53:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:19:15.991 [global] 00:19:15.991 thread=1 00:19:15.991 invalidate=1 00:19:15.991 rw=read 00:19:15.991 time_based=1 00:19:15.991 runtime=10 00:19:15.991 ioengine=libaio 00:19:15.991 direct=1 00:19:15.991 bs=1048576 00:19:15.991 iodepth=128 00:19:15.991 norandommap=1 00:19:15.991 numjobs=13 00:19:15.991 00:19:15.991 [job0] 00:19:15.991 filename=/dev/nvme0n1 00:19:15.991 [job1] 00:19:15.991 filename=/dev/nvme1n1 00:19:15.991 [job2] 00:19:15.991 filename=/dev/nvme2n1 00:19:15.991 [job3] 00:19:15.991 filename=/dev/nvme3n1 00:19:15.991 [job4] 00:19:15.991 filename=/dev/nvme4n1 00:19:15.991 [job5] 00:19:15.991 filename=/dev/nvme5n1 00:19:15.991 Could not set queue depth (nvme0n1) 00:19:15.991 Could not set queue depth (nvme1n1) 00:19:15.991 Could not set queue depth (nvme2n1) 00:19:15.991 Could not set queue depth (nvme3n1) 00:19:15.991 Could not set queue depth (nvme4n1) 00:19:15.991 Could not set queue depth (nvme5n1) 00:19:15.991 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:15.991 ... 00:19:15.991 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:15.991 ... 00:19:15.991 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:15.991 ... 00:19:15.991 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:15.991 ... 00:19:15.991 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:15.991 ... 00:19:15.991 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:15.991 ... 
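Each of the six iterations above follows the same five-step pattern: create a subsystem, back it with a 64 MiB / 512 B-block malloc bdev, attach the namespace, expose an RDMA listener on 192.168.100.8:4420, and connect from the host side until the block device shows up. Condensed into one loop, with the commands copied from the trace; the serial-number format string and the simplified waitforblk polling are our reconstruction, and the host UUID is the one printed in this particular run:

    HOSTID=803833e2-2ada-e911-906e-0017a4403562    # host UUID as printed in this run
    TARGET_IP=192.168.100.8

    for i in $(seq 0 5); do
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK$(printf '%014d' "$i")"
        ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a "$TARGET_IP" -s 4420

        nvme connect -i 15 --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" \
            --hostid="$HOSTID" -t rdma -n "nqn.2016-06.io.spdk:cnode$i" \
            -a "$TARGET_IP" -s 4420

        # Simplified waitforblk: poll lsblk until the namespace appears.
        until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
    done

The fio job file dumped above then drives all six namespaces (/dev/nvme0n1 through /dev/nvme5n1) with 13 jobs each at iodepth 128, which is where the "Starting 78 threads" banner below comes from (6 x 13 = 78).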
00:19:15.991 fio-3.35 00:19:15.991 Starting 78 threads 00:19:30.857 00:19:30.857 job0: (groupid=0, jobs=1): err= 0: pid=2877012: Mon Jul 15 14:54:02 2024 00:19:30.857 read: IOPS=77, BW=77.5MiB/s (81.3MB/s)(782MiB/10086msec) 00:19:30.857 slat (usec): min=39, max=1245.5k, avg=12793.07, stdev=63373.97 00:19:30.857 clat (msec): min=77, max=3399, avg=1384.55, stdev=934.19 00:19:30.857 lat (msec): min=138, max=3402, avg=1397.34, stdev=938.27 00:19:30.857 clat percentiles (msec): 00:19:30.857 | 1.00th=[ 300], 5.00th=[ 502], 10.00th=[ 527], 20.00th=[ 575], 00:19:30.857 | 30.00th=[ 676], 40.00th=[ 743], 50.00th=[ 802], 60.00th=[ 1469], 00:19:30.857 | 70.00th=[ 2005], 80.00th=[ 2198], 90.00th=[ 2970], 95.00th=[ 3306], 00:19:30.857 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3406], 99.95th=[ 3406], 00:19:30.857 | 99.99th=[ 3406] 00:19:30.857 bw ( KiB/s): min= 2048, max=251904, per=2.67%, avg=89429.33, stdev=71120.54, samples=15 00:19:30.857 iops : min= 2, max= 246, avg=87.33, stdev=69.45, samples=15 00:19:30.857 lat (msec) : 100=0.13%, 250=0.64%, 500=1.53%, 750=38.87%, 1000=14.32% 00:19:30.857 lat (msec) : 2000=13.43%, >=2000=31.07% 00:19:30.857 cpu : usr=0.08%, sys=1.40%, ctx=1325, majf=0, minf=32769 00:19:30.857 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=91.9% 00:19:30.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.857 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.857 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.857 job0: (groupid=0, jobs=1): err= 0: pid=2877013: Mon Jul 15 14:54:02 2024 00:19:30.857 read: IOPS=21, BW=21.5MiB/s (22.5MB/s)(261MiB/12156msec) 00:19:30.857 slat (usec): min=71, max=2118.2k, avg=38469.55, stdev=207685.51 00:19:30.857 clat (msec): min=1411, max=7099, avg=3890.81, stdev=1820.95 00:19:30.857 lat (msec): min=1434, max=7123, avg=3929.28, stdev=1821.08 00:19:30.857 clat percentiles (msec): 00:19:30.857 | 1.00th=[ 1435], 5.00th=[ 1469], 10.00th=[ 1536], 20.00th=[ 1636], 00:19:30.857 | 30.00th=[ 1787], 40.00th=[ 3742], 50.00th=[ 4245], 60.00th=[ 4530], 00:19:30.857 | 70.00th=[ 4732], 80.00th=[ 5134], 90.00th=[ 6946], 95.00th=[ 7013], 00:19:30.857 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 7080], 99.95th=[ 7080], 00:19:30.857 | 99.99th=[ 7080] 00:19:30.857 bw ( KiB/s): min= 1812, max=96256, per=1.36%, avg=45699.33, stdev=35898.60, samples=6 00:19:30.857 iops : min= 1, max= 94, avg=44.50, stdev=35.25, samples=6 00:19:30.857 lat (msec) : 2000=31.03%, >=2000=68.97% 00:19:30.857 cpu : usr=0.00%, sys=0.77%, ctx=658, majf=0, minf=32769 00:19:30.857 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.1%, 32=12.3%, >=64=75.9% 00:19:30.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.857 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:30.857 issued rwts: total=261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.857 job0: (groupid=0, jobs=1): err= 0: pid=2877014: Mon Jul 15 14:54:02 2024 00:19:30.857 read: IOPS=104, BW=104MiB/s (109MB/s)(1049MiB/10086msec) 00:19:30.857 slat (usec): min=31, max=1184.2k, avg=9538.24, stdev=39473.69 00:19:30.857 clat (msec): min=76, max=2950, avg=998.40, stdev=444.09 00:19:30.857 lat (msec): min=87, max=2955, avg=1007.94, stdev=449.50 00:19:30.857 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 117], 5.00th=[ 363], 10.00th=[ 617], 
20.00th=[ 667], 00:19:30.858 | 30.00th=[ 709], 40.00th=[ 743], 50.00th=[ 911], 60.00th=[ 1099], 00:19:30.858 | 70.00th=[ 1267], 80.00th=[ 1368], 90.00th=[ 1519], 95.00th=[ 1603], 00:19:30.858 | 99.00th=[ 2869], 99.50th=[ 2903], 99.90th=[ 2937], 99.95th=[ 2937], 00:19:30.858 | 99.99th=[ 2937] 00:19:30.858 bw ( KiB/s): min=26624, max=210944, per=3.76%, avg=125883.73, stdev=51263.55, samples=15 00:19:30.858 iops : min= 26, max= 206, avg=122.93, stdev=50.06, samples=15 00:19:30.858 lat (msec) : 100=0.29%, 250=3.05%, 500=4.19%, 750=32.98%, 1000=14.30% 00:19:30.858 lat (msec) : 2000=43.66%, >=2000=1.53% 00:19:30.858 cpu : usr=0.02%, sys=1.42%, ctx=1618, majf=0, minf=32769 00:19:30.858 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.858 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877015: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=4, BW=5049KiB/s (5170kB/s)(60.0MiB/12168msec) 00:19:30.858 slat (usec): min=490, max=2128.8k, avg=167619.83, stdev=543913.93 00:19:30.858 clat (msec): min=2109, max=12166, avg=10901.07, stdev=2482.97 00:19:30.858 lat (msec): min=4238, max=12167, avg=11068.69, stdev=2203.15 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 2106], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[ 8557], 00:19:30.858 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:19:30.858 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:30.858 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.858 | 99.99th=[12147] 00:19:30.858 lat (msec) : >=2000=100.00% 00:19:30.858 cpu : usr=0.02%, sys=0.32%, ctx=110, majf=0, minf=15361 00:19:30.858 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:30.858 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877016: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=73, BW=74.0MiB/s (77.6MB/s)(751MiB/10151msec) 00:19:30.858 slat (usec): min=38, max=3357.8k, avg=13326.41, stdev=123095.25 00:19:30.858 clat (msec): min=137, max=4857, avg=1038.96, stdev=660.48 00:19:30.858 lat (msec): min=228, max=6294, avg=1052.29, stdev=685.79 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 317], 5.00th=[ 609], 10.00th=[ 625], 20.00th=[ 667], 00:19:30.858 | 30.00th=[ 743], 40.00th=[ 768], 50.00th=[ 793], 60.00th=[ 827], 00:19:30.858 | 70.00th=[ 961], 80.00th=[ 1318], 90.00th=[ 1821], 95.00th=[ 2265], 00:19:30.858 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:19:30.858 | 99.99th=[ 4866] 00:19:30.858 bw ( KiB/s): min=30720, max=202752, per=3.46%, avg=115958.91, stdev=63718.86, samples=11 00:19:30.858 iops : min= 30, max= 198, avg=113.18, stdev=62.16, samples=11 00:19:30.858 lat (msec) : 250=0.67%, 500=1.20%, 750=32.22%, 1000=37.68%, 2000=20.24% 00:19:30.858 lat (msec) : >=2000=7.99% 00:19:30.858 cpu : usr=0.05%, sys=1.48%, ctx=1081, majf=0, minf=32769 00:19:30.858 IO depths : 1=0.1%, 2=0.3%, 
4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.858 issued rwts: total=751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877017: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=3, BW=3314KiB/s (3393kB/s)(39.0MiB/12052msec) 00:19:30.858 slat (usec): min=1916, max=2105.0k, avg=307712.29, stdev=701502.90 00:19:30.858 clat (msec): min=50, max=12033, avg=7168.91, stdev=3226.06 00:19:30.858 lat (msec): min=2086, max=12051, avg=7476.62, stdev=3099.06 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 51], 5.00th=[ 2089], 10.00th=[ 2123], 20.00th=[ 4279], 00:19:30.858 | 30.00th=[ 6208], 40.00th=[ 6342], 50.00th=[ 6477], 60.00th=[ 8557], 00:19:30.858 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12013], 95.00th=[12013], 00:19:30.858 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:30.858 | 99.99th=[12013] 00:19:30.858 lat (msec) : 100=2.56%, >=2000=97.44% 00:19:30.858 cpu : usr=0.00%, sys=0.19%, ctx=103, majf=0, minf=9985 00:19:30.858 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:30.858 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877018: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=41, BW=41.8MiB/s (43.9MB/s)(422MiB/10089msec) 00:19:30.858 slat (usec): min=351, max=2006.9k, avg=23697.15, stdev=115266.34 00:19:30.858 clat (msec): min=86, max=4919, avg=2154.65, stdev=1030.22 00:19:30.858 lat (msec): min=103, max=4923, avg=2178.35, stdev=1036.10 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 157], 5.00th=[ 363], 10.00th=[ 852], 20.00th=[ 1452], 00:19:30.858 | 30.00th=[ 1603], 40.00th=[ 1888], 50.00th=[ 2198], 60.00th=[ 2333], 00:19:30.858 | 70.00th=[ 2601], 80.00th=[ 2735], 90.00th=[ 3641], 95.00th=[ 4732], 00:19:30.858 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:19:30.858 | 99.99th=[ 4933] 00:19:30.858 bw ( KiB/s): min=22528, max=106496, per=1.64%, avg=54923.64, stdev=22318.84, samples=11 00:19:30.858 iops : min= 22, max= 104, avg=53.64, stdev=21.80, samples=11 00:19:30.858 lat (msec) : 100=0.24%, 250=2.13%, 500=4.74%, 750=2.13%, 1000=1.90% 00:19:30.858 lat (msec) : 2000=31.04%, >=2000=57.82% 00:19:30.858 cpu : usr=0.02%, sys=1.16%, ctx=1302, majf=0, minf=32769 00:19:30.858 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.858 issued rwts: total=422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877019: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=40, BW=40.6MiB/s (42.6MB/s)(408MiB/10037msec) 00:19:30.858 slat (usec): min=77, max=2126.5k, avg=24510.39, stdev=148568.05 00:19:30.858 clat (msec): min=34, max=7503, avg=2997.40, stdev=2116.85 00:19:30.858 lat (msec): min=37, 
max=7513, avg=3021.91, stdev=2121.44 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 79], 5.00th=[ 334], 10.00th=[ 575], 20.00th=[ 1003], 00:19:30.858 | 30.00th=[ 1636], 40.00th=[ 2123], 50.00th=[ 2299], 60.00th=[ 2567], 00:19:30.858 | 70.00th=[ 4178], 80.00th=[ 5738], 90.00th=[ 6409], 95.00th=[ 6544], 00:19:30.858 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 7483], 99.95th=[ 7483], 00:19:30.858 | 99.99th=[ 7483] 00:19:30.858 bw ( KiB/s): min= 2048, max=86016, per=1.25%, avg=41813.33, stdev=26594.74, samples=12 00:19:30.858 iops : min= 2, max= 84, avg=40.83, stdev=25.97, samples=12 00:19:30.858 lat (msec) : 50=0.49%, 100=1.72%, 250=1.23%, 500=4.17%, 750=6.37% 00:19:30.858 lat (msec) : 1000=6.13%, 2000=13.97%, >=2000=65.93% 00:19:30.858 cpu : usr=0.04%, sys=1.07%, ctx=988, majf=0, minf=32769 00:19:30.858 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=3.9%, 32=7.8%, >=64=84.6% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:30.858 issued rwts: total=408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877020: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=47, BW=47.7MiB/s (50.1MB/s)(481MiB/10075msec) 00:19:30.858 slat (usec): min=63, max=2024.4k, avg=20795.91, stdev=118946.16 00:19:30.858 clat (msec): min=69, max=4114, avg=1734.03, stdev=687.95 00:19:30.858 lat (msec): min=76, max=4231, avg=1754.82, stdev=695.43 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 106], 5.00th=[ 259], 10.00th=[ 735], 20.00th=[ 1301], 00:19:30.858 | 30.00th=[ 1452], 40.00th=[ 1754], 50.00th=[ 1955], 60.00th=[ 2056], 00:19:30.858 | 70.00th=[ 2089], 80.00th=[ 2299], 90.00th=[ 2433], 95.00th=[ 2500], 00:19:30.858 | 99.00th=[ 2970], 99.50th=[ 2970], 99.90th=[ 4111], 99.95th=[ 4111], 00:19:30.858 | 99.99th=[ 4111] 00:19:30.858 bw ( KiB/s): min=40960, max=94208, per=2.13%, avg=71452.44, stdev=17330.86, samples=9 00:19:30.858 iops : min= 40, max= 92, avg=69.78, stdev=16.92, samples=9 00:19:30.858 lat (msec) : 100=0.42%, 250=4.37%, 500=3.12%, 750=2.49%, 1000=7.07% 00:19:30.858 lat (msec) : 2000=34.93%, >=2000=47.61% 00:19:30.858 cpu : usr=0.03%, sys=1.17%, ctx=1027, majf=0, minf=32769 00:19:30.858 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.7%, >=64=86.9% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.858 issued rwts: total=481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877021: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=57, BW=58.0MiB/s (60.8MB/s)(586MiB/10110msec) 00:19:30.858 slat (usec): min=435, max=1901.7k, avg=17069.11, stdev=93545.24 00:19:30.858 clat (msec): min=104, max=4061, avg=1945.31, stdev=1071.89 00:19:30.858 lat (msec): min=121, max=4067, avg=1962.37, stdev=1073.06 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 186], 5.00th=[ 506], 10.00th=[ 835], 20.00th=[ 927], 00:19:30.858 | 30.00th=[ 1133], 40.00th=[ 1334], 50.00th=[ 1938], 60.00th=[ 2089], 00:19:30.858 | 70.00th=[ 2299], 80.00th=[ 2937], 90.00th=[ 3910], 95.00th=[ 3977], 00:19:30.858 | 99.00th=[ 4044], 99.50th=[ 4044], 99.90th=[ 4077], 99.95th=[ 4077], 00:19:30.858 | 99.99th=[ 4077] 00:19:30.858 bw ( 
KiB/s): min=28672, max=159744, per=2.16%, avg=72310.15, stdev=39996.21, samples=13 00:19:30.858 iops : min= 28, max= 156, avg=70.62, stdev=39.06, samples=13 00:19:30.858 lat (msec) : 250=1.54%, 500=3.41%, 750=3.24%, 1000=16.55%, 2000=27.82% 00:19:30.858 lat (msec) : >=2000=47.44% 00:19:30.858 cpu : usr=0.03%, sys=1.36%, ctx=1382, majf=0, minf=32769 00:19:30.858 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:19:30.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.858 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.858 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.858 job0: (groupid=0, jobs=1): err= 0: pid=2877022: Mon Jul 15 14:54:02 2024 00:19:30.858 read: IOPS=40, BW=40.9MiB/s (42.9MB/s)(498MiB/12167msec) 00:19:30.858 slat (usec): min=98, max=2098.5k, avg=20191.19, stdev=109354.66 00:19:30.858 clat (msec): min=1328, max=5813, avg=2431.10, stdev=1161.35 00:19:30.858 lat (msec): min=1338, max=5824, avg=2451.29, stdev=1173.17 00:19:30.858 clat percentiles (msec): 00:19:30.858 | 1.00th=[ 1351], 5.00th=[ 1385], 10.00th=[ 1401], 20.00th=[ 1569], 00:19:30.859 | 30.00th=[ 1620], 40.00th=[ 1670], 50.00th=[ 1854], 60.00th=[ 2198], 00:19:30.859 | 70.00th=[ 2668], 80.00th=[ 3574], 90.00th=[ 4530], 95.00th=[ 4866], 00:19:30.859 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 5805], 99.95th=[ 5805], 00:19:30.859 | 99.99th=[ 5805] 00:19:30.859 bw ( KiB/s): min= 1802, max=100352, per=1.89%, avg=63296.83, stdev=26845.90, samples=12 00:19:30.859 iops : min= 1, max= 98, avg=61.75, stdev=26.38, samples=12 00:19:30.859 lat (msec) : 2000=53.41%, >=2000=46.59% 00:19:30.859 cpu : usr=0.02%, sys=1.03%, ctx=1286, majf=0, minf=32769 00:19:30.859 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.4%, >=64=87.3% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.859 issued rwts: total=498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job0: (groupid=0, jobs=1): err= 0: pid=2877023: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=30, BW=30.6MiB/s (32.1MB/s)(311MiB/10148msec) 00:19:30.859 slat (usec): min=52, max=2112.4k, avg=32177.74, stdev=180464.85 00:19:30.859 clat (msec): min=138, max=7124, avg=2540.68, stdev=1902.87 00:19:30.859 lat (msec): min=175, max=7127, avg=2572.85, stdev=1915.39 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 194], 5.00th=[ 388], 10.00th=[ 919], 20.00th=[ 1418], 00:19:30.859 | 30.00th=[ 1653], 40.00th=[ 1838], 50.00th=[ 2005], 60.00th=[ 2106], 00:19:30.859 | 70.00th=[ 2400], 80.00th=[ 2567], 90.00th=[ 6879], 95.00th=[ 7080], 00:19:30.859 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:19:30.859 | 99.99th=[ 7148] 00:19:30.859 bw ( KiB/s): min=26624, max=92160, per=1.61%, avg=53833.14, stdev=21339.67, samples=7 00:19:30.859 iops : min= 26, max= 90, avg=52.57, stdev=20.84, samples=7 00:19:30.859 lat (msec) : 250=2.25%, 500=3.86%, 750=2.89%, 1000=2.57%, 2000=38.59% 00:19:30.859 lat (msec) : >=2000=49.84% 00:19:30.859 cpu : usr=0.02%, sys=0.87%, ctx=811, majf=0, minf=32769 00:19:30.859 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.3%, >=64=79.7% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 
complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:30.859 issued rwts: total=311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job0: (groupid=0, jobs=1): err= 0: pid=2877024: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=1, BW=1862KiB/s (1907kB/s)(22.0MiB/12098msec) 00:19:30.859 slat (usec): min=1537, max=2122.3k, avg=455120.26, stdev=833566.55 00:19:30.859 clat (msec): min=2084, max=12088, avg=7400.98, stdev=3404.70 00:19:30.859 lat (msec): min=2099, max=12097, avg=7856.10, stdev=3328.56 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 2089], 5.00th=[ 2106], 10.00th=[ 2123], 20.00th=[ 4279], 00:19:30.859 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8557], 00:19:30.859 | 70.00th=[ 8658], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:19:30.859 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.859 | 99.99th=[12147] 00:19:30.859 lat (msec) : >=2000=100.00% 00:19:30.859 cpu : usr=0.00%, sys=0.15%, ctx=73, majf=0, minf=5633 00:19:30.859 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:30.859 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job1: (groupid=0, jobs=1): err= 0: pid=2877025: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=2, BW=2194KiB/s (2246kB/s)(26.0MiB/12137msec) 00:19:30.859 slat (usec): min=987, max=2148.9k, avg=385690.54, stdev=792718.14 00:19:30.859 clat (msec): min=2108, max=12135, avg=10167.18, stdev=3267.61 00:19:30.859 lat (msec): min=4239, max=12136, avg=10552.87, stdev=2842.49 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:30.859 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:19:30.859 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:30.859 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.859 | 99.99th=[12147] 00:19:30.859 lat (msec) : >=2000=100.00% 00:19:30.859 cpu : usr=0.00%, sys=0.18%, ctx=90, majf=0, minf=6657 00:19:30.859 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:30.859 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job1: (groupid=0, jobs=1): err= 0: pid=2877026: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=22, BW=22.9MiB/s (24.0MB/s)(277MiB/12078msec) 00:19:30.859 slat (usec): min=71, max=2033.7k, avg=43406.83, stdev=229795.92 00:19:30.859 clat (msec): min=52, max=6549, avg=3989.26, stdev=1025.23 00:19:30.859 lat (msec): min=2081, max=6551, avg=4032.67, stdev=1003.50 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 2089], 5.00th=[ 3171], 10.00th=[ 3339], 20.00th=[ 3373], 00:19:30.859 | 30.00th=[ 3440], 40.00th=[ 3473], 50.00th=[ 3641], 60.00th=[ 3977], 00:19:30.859 | 70.00th=[ 4178], 80.00th=[ 4279], 90.00th=[ 5403], 95.00th=[ 6477], 00:19:30.859 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 
00:19:30.859 | 99.99th=[ 6544] 00:19:30.859 bw ( KiB/s): min=10199, max=73728, per=1.52%, avg=50851.83, stdev=23054.84, samples=6 00:19:30.859 iops : min= 9, max= 72, avg=49.50, stdev=22.85, samples=6 00:19:30.859 lat (msec) : 100=0.36%, >=2000=99.64% 00:19:30.859 cpu : usr=0.02%, sys=0.71%, ctx=568, majf=0, minf=32769 00:19:30.859 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.3% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:30.859 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job1: (groupid=0, jobs=1): err= 0: pid=2877027: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=15, BW=15.3MiB/s (16.1MB/s)(185MiB/12076msec) 00:19:30.859 slat (usec): min=34, max=2122.4k, avg=54480.50, stdev=257797.39 00:19:30.859 clat (msec): min=1613, max=11409, avg=7767.39, stdev=3499.93 00:19:30.859 lat (msec): min=1615, max=11424, avg=7821.87, stdev=3480.70 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 1620], 5.00th=[ 1754], 10.00th=[ 2106], 20.00th=[ 3641], 00:19:30.859 | 30.00th=[ 4212], 40.00th=[ 7886], 50.00th=[ 9329], 60.00th=[10402], 00:19:30.859 | 70.00th=[10537], 80.00th=[10805], 90.00th=[11208], 95.00th=[11342], 00:19:30.859 | 99.00th=[11342], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:19:30.859 | 99.99th=[11476] 00:19:30.859 bw ( KiB/s): min= 1404, max=34816, per=0.44%, avg=14685.63, stdev=13052.77, samples=8 00:19:30.859 iops : min= 1, max= 34, avg=14.25, stdev=12.79, samples=8 00:19:30.859 lat (msec) : 2000=9.73%, >=2000=90.27% 00:19:30.859 cpu : usr=0.00%, sys=0.76%, ctx=551, majf=0, minf=32769 00:19:30.859 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.3%, >=64=65.9% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:19:30.859 issued rwts: total=185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job1: (groupid=0, jobs=1): err= 0: pid=2877028: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=45, BW=45.6MiB/s (47.8MB/s)(554MiB/12149msec) 00:19:30.859 slat (usec): min=30, max=2064.0k, avg=18112.26, stdev=136976.87 00:19:30.859 clat (msec): min=517, max=7713, avg=2475.70, stdev=1865.62 00:19:30.859 lat (msec): min=518, max=7755, avg=2493.82, stdev=1869.09 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 523], 5.00th=[ 575], 10.00th=[ 600], 20.00th=[ 676], 00:19:30.859 | 30.00th=[ 751], 40.00th=[ 793], 50.00th=[ 2165], 60.00th=[ 3071], 00:19:30.859 | 70.00th=[ 3540], 80.00th=[ 4799], 90.00th=[ 5403], 95.00th=[ 5537], 00:19:30.859 | 99.00th=[ 5604], 99.50th=[ 5671], 99.90th=[ 7684], 99.95th=[ 7684], 00:19:30.859 | 99.99th=[ 7684] 00:19:30.859 bw ( KiB/s): min= 1812, max=231424, per=2.61%, avg=87426.00, stdev=75925.68, samples=10 00:19:30.859 iops : min= 1, max= 226, avg=85.30, stdev=74.24, samples=10 00:19:30.859 lat (msec) : 750=29.24%, 1000=16.97%, 2000=0.18%, >=2000=53.61% 00:19:30.859 cpu : usr=0.02%, sys=0.95%, ctx=640, majf=0, minf=32769 00:19:30.859 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 
00:19:30.859 issued rwts: total=554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job1: (groupid=0, jobs=1): err= 0: pid=2877029: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=1, BW=1940KiB/s (1986kB/s)(23.0MiB/12141msec) 00:19:30.859 slat (usec): min=1013, max=2158.1k, avg=436340.70, stdev=830142.95 00:19:30.859 clat (msec): min=2104, max=12139, avg=10113.10, stdev=3186.61 00:19:30.859 lat (msec): min=4228, max=12140, avg=10549.44, stdev=2688.25 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6477], 00:19:30.859 | 30.00th=[ 8557], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:19:30.859 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:30.859 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.859 | 99.99th=[12147] 00:19:30.859 lat (msec) : >=2000=100.00% 00:19:30.859 cpu : usr=0.00%, sys=0.16%, ctx=87, majf=0, minf=5889 00:19:30.859 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:30.859 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.859 job1: (groupid=0, jobs=1): err= 0: pid=2877030: Mon Jul 15 14:54:02 2024 00:19:30.859 read: IOPS=27, BW=27.1MiB/s (28.4MB/s)(274MiB/10105msec) 00:19:30.859 slat (usec): min=62, max=2191.6k, avg=36725.59, stdev=213423.41 00:19:30.859 clat (msec): min=40, max=8710, avg=3013.21, stdev=1569.49 00:19:30.859 lat (msec): min=1247, max=8726, avg=3049.94, stdev=1584.96 00:19:30.859 clat percentiles (msec): 00:19:30.859 | 1.00th=[ 1250], 5.00th=[ 1301], 10.00th=[ 1334], 20.00th=[ 1351], 00:19:30.859 | 30.00th=[ 1401], 40.00th=[ 1502], 50.00th=[ 3708], 60.00th=[ 3943], 00:19:30.859 | 70.00th=[ 4212], 80.00th=[ 4665], 90.00th=[ 4866], 95.00th=[ 5000], 00:19:30.859 | 99.00th=[ 5067], 99.50th=[ 6611], 99.90th=[ 8658], 99.95th=[ 8658], 00:19:30.859 | 99.99th=[ 8658] 00:19:30.859 bw ( KiB/s): min=10240, max=112640, per=1.78%, avg=59801.60, stdev=46259.42, samples=5 00:19:30.859 iops : min= 10, max= 110, avg=58.40, stdev=45.18, samples=5 00:19:30.859 lat (msec) : 50=0.36%, 2000=43.43%, >=2000=56.20% 00:19:30.860 cpu : usr=0.01%, sys=0.77%, ctx=566, majf=0, minf=32769 00:19:30.860 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.8%, 32=11.7%, >=64=77.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:30.860 issued rwts: total=274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job1: (groupid=0, jobs=1): err= 0: pid=2877031: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=1, BW=1106KiB/s (1132kB/s)(13.0MiB/12038msec) 00:19:30.860 slat (msec): min=4, max=2130, avg=772.53, stdev=978.17 00:19:30.860 clat (msec): min=1994, max=12004, avg=6745.31, stdev=3626.12 00:19:30.860 lat (msec): min=2110, max=12037, avg=7517.85, stdev=3599.36 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 1989], 5.00th=[ 1989], 10.00th=[ 2106], 20.00th=[ 4212], 00:19:30.860 | 30.00th=[ 4279], 40.00th=[ 4279], 50.00th=[ 6342], 60.00th=[ 6409], 00:19:30.860 | 70.00th=[10671], 80.00th=[10671], 
90.00th=[11879], 95.00th=[12013], 00:19:30.860 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:30.860 | 99.99th=[12013] 00:19:30.860 lat (msec) : 2000=7.69%, >=2000=92.31% 00:19:30.860 cpu : usr=0.00%, sys=0.06%, ctx=62, majf=0, minf=3329 00:19:30.860 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job1: (groupid=0, jobs=1): err= 0: pid=2877032: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=124, BW=125MiB/s (131MB/s)(1253MiB/10058msec) 00:19:30.860 slat (usec): min=29, max=1654.1k, avg=7977.34, stdev=51776.70 00:19:30.860 clat (msec): min=56, max=3854, avg=950.37, stdev=905.67 00:19:30.860 lat (msec): min=120, max=4578, avg=958.35, stdev=911.85 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 131], 5.00th=[ 359], 10.00th=[ 414], 20.00th=[ 506], 00:19:30.860 | 30.00th=[ 523], 40.00th=[ 584], 50.00th=[ 676], 60.00th=[ 743], 00:19:30.860 | 70.00th=[ 818], 80.00th=[ 911], 90.00th=[ 2836], 95.00th=[ 3675], 00:19:30.860 | 99.00th=[ 3809], 99.50th=[ 3842], 99.90th=[ 3842], 99.95th=[ 3842], 00:19:30.860 | 99.99th=[ 3842] 00:19:30.860 bw ( KiB/s): min=43008, max=311296, per=4.91%, avg=164691.43, stdev=77741.08, samples=14 00:19:30.860 iops : min= 42, max= 304, avg=160.79, stdev=75.91, samples=14 00:19:30.860 lat (msec) : 100=0.08%, 250=3.27%, 500=14.45%, 750=42.94%, 1000=22.35% 00:19:30.860 lat (msec) : 2000=6.62%, >=2000=10.30% 00:19:30.860 cpu : usr=0.01%, sys=1.85%, ctx=1444, majf=0, minf=32769 00:19:30.860 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=95.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.860 issued rwts: total=1253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job1: (groupid=0, jobs=1): err= 0: pid=2877033: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=19, BW=19.7MiB/s (20.6MB/s)(237MiB/12049msec) 00:19:30.860 slat (usec): min=30, max=2075.9k, avg=42402.20, stdev=223391.33 00:19:30.860 clat (msec): min=694, max=10077, avg=5592.05, stdev=3621.39 00:19:30.860 lat (msec): min=695, max=10078, avg=5634.45, stdev=3616.90 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 735], 5.00th=[ 751], 10.00th=[ 768], 20.00th=[ 852], 00:19:30.860 | 30.00th=[ 2400], 40.00th=[ 4010], 50.00th=[ 5940], 60.00th=[ 7752], 00:19:30.860 | 70.00th=[ 9731], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:19:30.860 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:30.860 | 99.99th=[10134] 00:19:30.860 bw ( KiB/s): min= 1426, max=112640, per=0.84%, avg=28082.25, stdev=35483.59, samples=8 00:19:30.860 iops : min= 1, max= 110, avg=27.38, stdev=34.69, samples=8 00:19:30.860 lat (msec) : 750=1.27%, 1000=19.41%, 2000=2.95%, >=2000=76.37% 00:19:30.860 cpu : usr=0.02%, sys=0.81%, ctx=366, majf=0, minf=32769 00:19:30.860 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.5%, >=64=73.4% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.9% 00:19:30.860 issued rwts: total=237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job1: (groupid=0, jobs=1): err= 0: pid=2877034: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=2, BW=2777KiB/s (2844kB/s)(33.0MiB/12168msec) 00:19:30.860 slat (usec): min=895, max=4194.8k, avg=367116.49, stdev=942847.22 00:19:30.860 clat (msec): min=52, max=12165, avg=10852.38, stdev=2882.44 00:19:30.860 lat (msec): min=4247, max=12167, avg=11219.50, stdev=2139.78 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 53], 5.00th=[ 4245], 10.00th=[ 6477], 20.00th=[10671], 00:19:30.860 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:19:30.860 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:30.860 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.860 | 99.99th=[12147] 00:19:30.860 lat (msec) : 100=3.03%, >=2000=96.97% 00:19:30.860 cpu : usr=0.00%, sys=0.22%, ctx=98, majf=0, minf=8449 00:19:30.860 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:30.860 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job1: (groupid=0, jobs=1): err= 0: pid=2877035: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=255, BW=255MiB/s (268MB/s)(3079MiB/12052msec) 00:19:30.860 slat (usec): min=34, max=2086.1k, avg=3891.54, stdev=64096.60 00:19:30.860 clat (msec): min=57, max=6599, avg=484.00, stdev=1210.33 00:19:30.860 lat (msec): min=120, max=6599, avg=487.89, stdev=1215.11 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 121], 5.00th=[ 122], 10.00th=[ 122], 20.00th=[ 122], 00:19:30.860 | 30.00th=[ 123], 40.00th=[ 123], 50.00th=[ 123], 60.00th=[ 124], 00:19:30.860 | 70.00th=[ 397], 80.00th=[ 405], 90.00th=[ 535], 95.00th=[ 609], 00:19:30.860 | 99.00th=[ 6544], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:19:30.860 | 99.99th=[ 6611] 00:19:30.860 bw ( KiB/s): min=13260, max=1075200, per=13.86%, avg=464498.15, stdev=422988.84, samples=13 00:19:30.860 iops : min= 12, max= 1050, avg=453.54, stdev=413.16, samples=13 00:19:30.860 lat (msec) : 100=0.03%, 250=64.86%, 500=21.05%, 750=9.19%, >=2000=4.87% 00:19:30.860 cpu : usr=0.16%, sys=2.35%, ctx=2888, majf=0, minf=32769 00:19:30.860 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.860 issued rwts: total=3079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job1: (groupid=0, jobs=1): err= 0: pid=2877036: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=1, BW=1099KiB/s (1126kB/s)(13.0MiB/12108msec) 00:19:30.860 slat (msec): min=6, max=2143, avg=769.50, stdev=1003.18 00:19:30.860 clat (msec): min=2104, max=12101, avg=8995.98, stdev=3524.48 00:19:30.860 lat (msec): min=4242, max=12107, avg=9765.49, stdev=2937.54 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:30.860 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[11879], 00:19:30.860 | 
70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:19:30.860 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.860 | 99.99th=[12147] 00:19:30.860 lat (msec) : >=2000=100.00% 00:19:30.860 cpu : usr=0.00%, sys=0.07%, ctx=80, majf=0, minf=3329 00:19:30.860 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job1: (groupid=0, jobs=1): err= 0: pid=2877037: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=1, BW=1351KiB/s (1384kB/s)(16.0MiB/12126msec) 00:19:30.860 slat (usec): min=1329, max=3391.4k, avg=625727.99, stdev=1114422.76 00:19:30.860 clat (msec): min=2113, max=12124, avg=8967.89, stdev=3543.02 00:19:30.860 lat (msec): min=4266, max=12125, avg=9593.62, stdev=3109.40 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4279], 20.00th=[ 6342], 00:19:30.860 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[12013], 00:19:30.860 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:30.860 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.860 | 99.99th=[12147] 00:19:30.860 lat (msec) : >=2000=100.00% 00:19:30.860 cpu : usr=0.00%, sys=0.10%, ctx=83, majf=0, minf=4097 00:19:30.860 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job2: (groupid=0, jobs=1): err= 0: pid=2877038: Mon Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=104, BW=104MiB/s (109MB/s)(1257MiB/12070msec) 00:19:30.860 slat (usec): min=38, max=3280.5k, avg=9551.09, stdev=94012.23 00:19:30.860 clat (msec): min=59, max=4159, avg=1125.89, stdev=894.39 00:19:30.860 lat (msec): min=519, max=4161, avg=1135.44, stdev=896.39 00:19:30.860 clat percentiles (msec): 00:19:30.860 | 1.00th=[ 518], 5.00th=[ 550], 10.00th=[ 634], 20.00th=[ 667], 00:19:30.860 | 30.00th=[ 709], 40.00th=[ 768], 50.00th=[ 810], 60.00th=[ 860], 00:19:30.860 | 70.00th=[ 927], 80.00th=[ 1183], 90.00th=[ 3339], 95.00th=[ 3708], 00:19:30.860 | 99.00th=[ 4010], 99.50th=[ 4077], 99.90th=[ 4144], 99.95th=[ 4144], 00:19:30.860 | 99.99th=[ 4144] 00:19:30.860 bw ( KiB/s): min=71680, max=221184, per=4.60%, avg=154146.13, stdev=48374.60, samples=15 00:19:30.860 iops : min= 70, max= 216, avg=150.53, stdev=47.24, samples=15 00:19:30.860 lat (msec) : 100=0.08%, 750=38.42%, 1000=35.08%, 2000=16.31%, >=2000=10.10% 00:19:30.860 cpu : usr=0.02%, sys=1.23%, ctx=1481, majf=0, minf=32769 00:19:30.860 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:19:30.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.860 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.860 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.860 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.860 job2: (groupid=0, jobs=1): err= 0: pid=2877039: Mon 
Jul 15 14:54:02 2024 00:19:30.860 read: IOPS=41, BW=41.0MiB/s (43.0MB/s)(495MiB/12063msec) 00:19:30.861 slat (usec): min=39, max=2088.5k, avg=24251.57, stdev=178839.99 00:19:30.861 clat (msec): min=55, max=8951, avg=2828.05, stdev=3315.19 00:19:30.861 lat (msec): min=498, max=8989, avg=2852.30, stdev=3321.04 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 498], 5.00th=[ 502], 10.00th=[ 514], 20.00th=[ 558], 00:19:30.861 | 30.00th=[ 642], 40.00th=[ 709], 50.00th=[ 760], 60.00th=[ 1351], 00:19:30.861 | 70.00th=[ 1770], 80.00th=[ 8557], 90.00th=[ 8792], 95.00th=[ 8926], 00:19:30.861 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:19:30.861 | 99.99th=[ 8926] 00:19:30.861 bw ( KiB/s): min= 2048, max=243712, per=2.80%, avg=93952.00, stdev=103467.80, samples=8 00:19:30.861 iops : min= 2, max= 238, avg=91.75, stdev=101.04, samples=8 00:19:30.861 lat (msec) : 100=0.20%, 500=6.26%, 750=41.41%, 1000=6.67%, 2000=15.96% 00:19:30.861 lat (msec) : >=2000=29.49% 00:19:30.861 cpu : usr=0.02%, sys=0.87%, ctx=699, majf=0, minf=32769 00:19:30.861 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.3% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.861 issued rwts: total=495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877040: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=26, BW=26.9MiB/s (28.2MB/s)(323MiB/12010msec) 00:19:30.861 slat (usec): min=33, max=2080.3k, avg=31071.60, stdev=202314.57 00:19:30.861 clat (msec): min=794, max=6815, avg=2831.38, stdev=1865.92 00:19:30.861 lat (msec): min=796, max=6817, avg=2862.45, stdev=1875.82 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 793], 5.00th=[ 810], 10.00th=[ 818], 20.00th=[ 827], 00:19:30.861 | 30.00th=[ 978], 40.00th=[ 1385], 50.00th=[ 2802], 60.00th=[ 4010], 00:19:30.861 | 70.00th=[ 4396], 80.00th=[ 4530], 90.00th=[ 4732], 95.00th=[ 5537], 00:19:30.861 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:19:30.861 | 99.99th=[ 6812] 00:19:30.861 bw ( KiB/s): min= 1446, max=151552, per=1.99%, avg=66801.00, stdev=62037.99, samples=6 00:19:30.861 iops : min= 1, max= 148, avg=65.17, stdev=60.67, samples=6 00:19:30.861 lat (msec) : 1000=31.58%, 2000=12.38%, >=2000=56.04% 00:19:30.861 cpu : usr=0.01%, sys=0.77%, ctx=432, majf=0, minf=32769 00:19:30.861 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=9.9%, >=64=80.5% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:30.861 issued rwts: total=323,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877041: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=41, BW=41.8MiB/s (43.8MB/s)(423MiB/10123msec) 00:19:30.861 slat (usec): min=50, max=2083.5k, avg=23644.29, stdev=183292.90 00:19:30.861 clat (msec): min=118, max=8430, avg=1255.53, stdev=1885.29 00:19:30.861 lat (msec): min=129, max=8465, avg=1279.17, stdev=1916.94 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 140], 5.00th=[ 234], 10.00th=[ 355], 20.00th=[ 584], 00:19:30.861 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 684], 60.00th=[ 701], 00:19:30.861 | 70.00th=[ 743], 
80.00th=[ 810], 90.00th=[ 2937], 95.00th=[ 7080], 00:19:30.861 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:19:30.861 | 99.99th=[ 8423] 00:19:30.861 bw ( KiB/s): min=40960, max=194560, per=4.52%, avg=151552.00, stdev=73860.62, samples=4 00:19:30.861 iops : min= 40, max= 190, avg=148.00, stdev=72.13, samples=4 00:19:30.861 lat (msec) : 250=5.20%, 500=10.64%, 750=56.03%, 1000=16.78%, >=2000=11.35% 00:19:30.861 cpu : usr=0.01%, sys=1.19%, ctx=383, majf=0, minf=32769 00:19:30.861 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.861 issued rwts: total=423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877042: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=50, BW=50.1MiB/s (52.5MB/s)(604MiB/12054msec) 00:19:30.861 slat (usec): min=31, max=2063.8k, avg=16642.14, stdev=133353.67 00:19:30.861 clat (msec): min=746, max=8503, avg=2432.63, stdev=1802.87 00:19:30.861 lat (msec): min=749, max=8522, avg=2449.27, stdev=1816.25 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 751], 5.00th=[ 768], 10.00th=[ 776], 20.00th=[ 827], 00:19:30.861 | 30.00th=[ 835], 40.00th=[ 919], 50.00th=[ 1167], 60.00th=[ 2836], 00:19:30.861 | 70.00th=[ 4279], 80.00th=[ 4732], 90.00th=[ 4799], 95.00th=[ 4866], 00:19:30.861 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 8490], 99.95th=[ 8490], 00:19:30.861 | 99.99th=[ 8490] 00:19:30.861 bw ( KiB/s): min= 1404, max=165556, per=2.24%, avg=75003.69, stdev=65056.75, samples=13 00:19:30.861 iops : min= 1, max= 161, avg=73.08, stdev=63.57, samples=13 00:19:30.861 lat (msec) : 750=1.16%, 1000=43.71%, 2000=9.93%, >=2000=45.20% 00:19:30.861 cpu : usr=0.02%, sys=1.05%, ctx=807, majf=0, minf=32769 00:19:30.861 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.861 issued rwts: total=604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877043: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=26, BW=26.6MiB/s (27.9MB/s)(267MiB/10051msec) 00:19:30.861 slat (usec): min=33, max=2139.2k, avg=37576.01, stdev=230103.45 00:19:30.861 clat (msec): min=16, max=8936, avg=4514.74, stdev=2738.72 00:19:30.861 lat (msec): min=61, max=8936, avg=4552.31, stdev=2737.15 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 64], 5.00th=[ 894], 10.00th=[ 919], 20.00th=[ 986], 00:19:30.861 | 30.00th=[ 2937], 40.00th=[ 3910], 50.00th=[ 4329], 60.00th=[ 5201], 00:19:30.861 | 70.00th=[ 6611], 80.00th=[ 6946], 90.00th=[ 8792], 95.00th=[ 8926], 00:19:30.861 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:19:30.861 | 99.99th=[ 8926] 00:19:30.861 bw ( KiB/s): min= 6144, max=86016, per=1.06%, avg=35584.00, stdev=26902.46, samples=8 00:19:30.861 iops : min= 6, max= 84, avg=34.75, stdev=26.27, samples=8 00:19:30.861 lat (msec) : 20=0.37%, 100=2.25%, 250=1.87%, 1000=15.73%, 2000=1.12% 00:19:30.861 lat (msec) : >=2000=78.65% 00:19:30.861 cpu : usr=0.05%, sys=0.98%, ctx=524, majf=0, minf=32769 00:19:30.861 IO depths : 1=0.4%, 
2=0.7%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.0%, >=64=76.4% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:30.861 issued rwts: total=267,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877044: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=16, BW=16.6MiB/s (17.4MB/s)(167MiB/10058msec) 00:19:30.861 slat (usec): min=75, max=2107.7k, avg=59892.21, stdev=288816.60 00:19:30.861 clat (msec): min=54, max=8320, avg=2674.23, stdev=1702.87 00:19:30.861 lat (msec): min=61, max=8477, avg=2734.12, stdev=1763.38 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 62], 5.00th=[ 1469], 10.00th=[ 1552], 20.00th=[ 1653], 00:19:30.861 | 30.00th=[ 1770], 40.00th=[ 1888], 50.00th=[ 2005], 60.00th=[ 2140], 00:19:30.861 | 70.00th=[ 2265], 80.00th=[ 4463], 90.00th=[ 4799], 95.00th=[ 6409], 00:19:30.861 | 99.00th=[ 8288], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:19:30.861 | 99.99th=[ 8288] 00:19:30.861 bw ( KiB/s): min=12288, max=53248, per=0.81%, avg=27306.67, stdev=22559.01, samples=3 00:19:30.861 iops : min= 12, max= 52, avg=26.67, stdev=22.03, samples=3 00:19:30.861 lat (msec) : 100=1.20%, 250=2.40%, 2000=43.71%, >=2000=52.69% 00:19:30.861 cpu : usr=0.00%, sys=0.61%, ctx=238, majf=0, minf=32769 00:19:30.861 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:19:30.861 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877045: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=7, BW=7455KiB/s (7634kB/s)(88.0MiB/12088msec) 00:19:30.861 slat (usec): min=669, max=2107.6k, avg=136683.70, stdev=457311.46 00:19:30.861 clat (msec): min=59, max=12010, avg=7548.00, stdev=2153.94 00:19:30.861 lat (msec): min=2089, max=12087, avg=7684.69, stdev=2052.52 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 60], 5.00th=[ 2140], 10.00th=[ 4279], 20.00th=[ 7483], 00:19:30.861 | 30.00th=[ 7617], 40.00th=[ 7752], 50.00th=[ 7953], 60.00th=[ 8154], 00:19:30.861 | 70.00th=[ 8288], 80.00th=[ 8423], 90.00th=[ 8557], 95.00th=[12013], 00:19:30.861 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:30.861 | 99.99th=[12013] 00:19:30.861 lat (msec) : 100=1.14%, >=2000=98.86% 00:19:30.861 cpu : usr=0.00%, sys=0.41%, ctx=272, majf=0, minf=22529 00:19:30.861 IO depths : 1=1.1%, 2=2.3%, 4=4.5%, 8=9.1%, 16=18.2%, 32=36.4%, >=64=28.4% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:30.861 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877046: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=9, BW=9922KiB/s (10.2MB/s)(117MiB/12075msec) 00:19:30.861 slat (usec): min=531, max=2088.1k, avg=102669.69, stdev=419124.06 00:19:30.861 clat (msec): min=61, max=12043, avg=9403.09, stdev=3591.67 00:19:30.861 lat (msec): min=2081, max=12074, avg=9505.76, stdev=3492.68 00:19:30.861 
clat percentiles (msec): 00:19:30.861 | 1.00th=[ 2089], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 6409], 00:19:30.861 | 30.00th=[ 8557], 40.00th=[11342], 50.00th=[11476], 60.00th=[11610], 00:19:30.861 | 70.00th=[11745], 80.00th=[11745], 90.00th=[11879], 95.00th=[11879], 00:19:30.861 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:30.861 | 99.99th=[12013] 00:19:30.861 lat (msec) : 100=0.85%, >=2000=99.15% 00:19:30.861 cpu : usr=0.00%, sys=0.64%, ctx=230, majf=0, minf=29953 00:19:30.861 IO depths : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.8%, 16=13.7%, 32=27.4%, >=64=46.2% 00:19:30.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.861 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:30.861 issued rwts: total=117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.861 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.861 job2: (groupid=0, jobs=1): err= 0: pid=2877047: Mon Jul 15 14:54:02 2024 00:19:30.861 read: IOPS=17, BW=17.8MiB/s (18.7MB/s)(216MiB/12127msec) 00:19:30.861 slat (usec): min=667, max=2150.4k, avg=46511.67, stdev=262058.61 00:19:30.861 clat (msec): min=1152, max=11204, avg=6760.95, stdev=3128.36 00:19:30.861 lat (msec): min=1162, max=11210, avg=6807.46, stdev=3123.38 00:19:30.861 clat percentiles (msec): 00:19:30.861 | 1.00th=[ 1183], 5.00th=[ 1234], 10.00th=[ 1250], 20.00th=[ 5537], 00:19:30.862 | 30.00th=[ 5604], 40.00th=[ 5671], 50.00th=[ 5738], 60.00th=[ 6074], 00:19:30.862 | 70.00th=[ 8557], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208], 00:19:30.862 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:19:30.862 | 99.99th=[11208] 00:19:30.862 bw ( KiB/s): min= 1950, max=71680, per=0.78%, avg=26024.86, stdev=26604.98, samples=7 00:19:30.862 iops : min= 1, max= 70, avg=25.29, stdev=26.12, samples=7 00:19:30.862 lat (msec) : 2000=11.57%, >=2000=88.43% 00:19:30.862 cpu : usr=0.00%, sys=0.82%, ctx=467, majf=0, minf=32769 00:19:30.862 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8% 00:19:30.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.862 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:19:30.862 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.862 job2: (groupid=0, jobs=1): err= 0: pid=2877048: Mon Jul 15 14:54:02 2024 00:19:30.862 read: IOPS=10, BW=10.5MiB/s (11.0MB/s)(127MiB/12075msec) 00:19:30.862 slat (usec): min=658, max=2181.6k, avg=94634.78, stdev=409149.16 00:19:30.862 clat (msec): min=55, max=12044, avg=10860.57, stdev=1936.86 00:19:30.862 lat (msec): min=2090, max=12074, avg=10955.21, stdev=1681.50 00:19:30.862 clat percentiles (msec): 00:19:30.862 | 1.00th=[ 2089], 5.00th=[ 6409], 10.00th=[10805], 20.00th=[10939], 00:19:30.862 | 30.00th=[11073], 40.00th=[11208], 50.00th=[11342], 60.00th=[11476], 00:19:30.862 | 70.00th=[11610], 80.00th=[11745], 90.00th=[11879], 95.00th=[11879], 00:19:30.862 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:30.862 | 99.99th=[12013] 00:19:30.862 lat (msec) : 100=0.79%, >=2000=99.21% 00:19:30.862 cpu : usr=0.00%, sys=0.77%, ctx=345, majf=0, minf=32513 00:19:30.862 IO depths : 1=0.8%, 2=1.6%, 4=3.1%, 8=6.3%, 16=12.6%, 32=25.2%, >=64=50.4% 00:19:30.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.862 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 
00:19:30.862 issued rwts: total=127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.862 job2: (groupid=0, jobs=1): err= 0: pid=2877049: Mon Jul 15 14:54:02 2024 00:19:30.862 read: IOPS=35, BW=35.2MiB/s (36.9MB/s)(425MiB/12087msec) 00:19:30.862 slat (usec): min=84, max=2171.2k, avg=23546.41, stdev=176061.82 00:19:30.862 clat (msec): min=378, max=8583, avg=3450.69, stdev=2762.46 00:19:30.862 lat (msec): min=380, max=10607, avg=3474.23, stdev=2778.05 00:19:30.862 clat percentiles (msec): 00:19:30.862 | 1.00th=[ 380], 5.00th=[ 388], 10.00th=[ 401], 20.00th=[ 550], 00:19:30.862 | 30.00th=[ 894], 40.00th=[ 2534], 50.00th=[ 3138], 60.00th=[ 3540], 00:19:30.862 | 70.00th=[ 3775], 80.00th=[ 7148], 90.00th=[ 7550], 95.00th=[ 7752], 00:19:30.862 | 99.00th=[ 8020], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:19:30.862 | 99.99th=[ 8557] 00:19:30.862 bw ( KiB/s): min= 2019, max=256000, per=1.82%, avg=61027.50, stdev=77157.69, samples=10 00:19:30.862 iops : min= 1, max= 250, avg=59.50, stdev=75.43, samples=10 00:19:30.862 lat (msec) : 500=18.35%, 750=6.82%, 1000=8.94%, 2000=5.18%, >=2000=60.71% 00:19:30.862 cpu : usr=0.02%, sys=0.91%, ctx=696, majf=0, minf=32769 00:19:30.862 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.2% 00:19:30.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.862 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.862 issued rwts: total=425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.862 job2: (groupid=0, jobs=1): err= 0: pid=2877050: Mon Jul 15 14:54:02 2024 00:19:30.862 read: IOPS=30, BW=30.5MiB/s (32.0MB/s)(369MiB/12098msec) 00:19:30.862 slat (usec): min=38, max=2124.1k, avg=32625.20, stdev=202206.64 00:19:30.862 clat (msec): min=57, max=8576, avg=3975.81, stdev=2190.30 00:19:30.862 lat (msec): min=1036, max=8582, avg=4008.43, stdev=2177.61 00:19:30.862 clat percentiles (msec): 00:19:30.862 | 1.00th=[ 1020], 5.00th=[ 1083], 10.00th=[ 1116], 20.00th=[ 1167], 00:19:30.862 | 30.00th=[ 2836], 40.00th=[ 3775], 50.00th=[ 3876], 60.00th=[ 4044], 00:19:30.862 | 70.00th=[ 6342], 80.00th=[ 6611], 90.00th=[ 6812], 95.00th=[ 6946], 00:19:30.862 | 99.00th=[ 7080], 99.50th=[ 7080], 99.90th=[ 8557], 99.95th=[ 8557], 00:19:30.862 | 99.99th=[ 8557] 00:19:30.862 bw ( KiB/s): min= 1996, max=169984, per=1.64%, avg=54817.33, stdev=50989.44, samples=9 00:19:30.862 iops : min= 1, max= 166, avg=53.33, stdev=49.87, samples=9 00:19:30.862 lat (msec) : 100=0.27%, 2000=29.27%, >=2000=70.46% 00:19:30.862 cpu : usr=0.02%, sys=0.82%, ctx=809, majf=0, minf=32769 00:19:30.862 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.7%, >=64=82.9% 00:19:30.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.862 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:30.862 issued rwts: total=369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.862 job3: (groupid=0, jobs=1): err= 0: pid=2877051: Mon Jul 15 14:54:02 2024 00:19:30.862 read: IOPS=26, BW=26.8MiB/s (28.1MB/s)(324MiB/12084msec) 00:19:30.862 slat (usec): min=44, max=2137.8k, avg=30872.95, stdev=230242.28 00:19:30.862 clat (msec): min=519, max=11162, avg=4597.05, stdev=4713.58 00:19:30.862 lat (msec): min=522, max=11166, avg=4627.93, stdev=4723.50 00:19:30.862 clat percentiles (msec): 
00:19:30.862 | 1.00th=[ 523], 5.00th=[ 523], 10.00th=[ 527], 20.00th=[ 527], 00:19:30.862 | 30.00th=[ 531], 40.00th=[ 558], 50.00th=[ 575], 60.00th=[ 4866], 00:19:30.862 | 70.00th=[10805], 80.00th=[10939], 90.00th=[11073], 95.00th=[11073], 00:19:30.862 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208], 00:19:30.862 | 99.99th=[11208] 00:19:30.862 bw ( KiB/s): min= 8192, max=188416, per=2.01%, avg=67296.67, stdev=84249.56, samples=6 00:19:30.862 iops : min= 8, max= 184, avg=65.67, stdev=82.20, samples=6 00:19:30.862 lat (msec) : 750=52.78%, >=2000=47.22% 00:19:30.862 cpu : usr=0.00%, sys=0.88%, ctx=303, majf=0, minf=32769 00:19:30.862 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.9%, >=64=80.6% 00:19:30.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.862 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:30.862 issued rwts: total=324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.862 job3: (groupid=0, jobs=1): err= 0: pid=2877052: Mon Jul 15 14:54:02 2024 00:19:30.862 read: IOPS=119, BW=119MiB/s (125MB/s)(1199MiB/10066msec) 00:19:30.862 slat (usec): min=37, max=2077.9k, avg=8335.36, stdev=71916.61 00:19:30.862 clat (msec): min=64, max=6364, avg=1031.44, stdev=1115.77 00:19:30.862 lat (msec): min=66, max=6451, avg=1039.78, stdev=1125.37 00:19:30.862 clat percentiles (msec): 00:19:30.862 | 1.00th=[ 153], 5.00th=[ 393], 10.00th=[ 393], 20.00th=[ 397], 00:19:30.862 | 30.00th=[ 397], 40.00th=[ 414], 50.00th=[ 481], 60.00th=[ 527], 00:19:30.862 | 70.00th=[ 567], 80.00th=[ 2005], 90.00th=[ 2970], 95.00th=[ 3339], 00:19:30.862 | 99.00th=[ 4665], 99.50th=[ 4799], 99.90th=[ 4933], 99.95th=[ 6342], 00:19:30.862 | 99.99th=[ 6342] 00:19:30.862 bw ( KiB/s): min= 2043, max=331776, per=4.68%, avg=156817.93, stdev=124125.85, samples=14 00:19:30.862 iops : min= 1, max= 324, avg=153.07, stdev=121.31, samples=14 00:19:30.862 lat (msec) : 100=0.42%, 250=1.58%, 500=49.21%, 750=23.44%, 1000=1.25% 00:19:30.862 lat (msec) : 2000=4.17%, >=2000=19.93% 00:19:30.862 cpu : usr=0.07%, sys=1.85%, ctx=1326, majf=0, minf=32769 00:19:30.862 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:19:30.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.862 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.862 issued rwts: total=1199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.862 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.862 job3: (groupid=0, jobs=1): err= 0: pid=2877053: Mon Jul 15 14:54:02 2024 00:19:30.862 read: IOPS=98, BW=98.8MiB/s (104MB/s)(995MiB/10074msec) 00:19:30.862 slat (usec): min=47, max=2043.1k, avg=10047.61, stdev=71914.17 00:19:30.862 clat (msec): min=71, max=5353, avg=1247.54, stdev=1387.21 00:19:30.862 lat (msec): min=80, max=5354, avg=1257.59, stdev=1393.68 00:19:30.862 clat percentiles (msec): 00:19:30.862 | 1.00th=[ 146], 5.00th=[ 384], 10.00th=[ 426], 20.00th=[ 493], 00:19:30.862 | 30.00th=[ 518], 40.00th=[ 567], 50.00th=[ 684], 60.00th=[ 894], 00:19:30.862 | 70.00th=[ 995], 80.00th=[ 1200], 90.00th=[ 4396], 95.00th=[ 4866], 00:19:30.862 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5336], 99.95th=[ 5336], 00:19:30.862 | 99.99th=[ 5336] 00:19:30.862 bw ( KiB/s): min=10240, max=253952, per=3.54%, avg=118510.47, stdev=86051.59, samples=15 00:19:30.862 iops : min= 10, max= 248, avg=115.67, stdev=83.93, samples=15 00:19:30.862 lat 
(msec) : 100=0.60%, 250=1.01%, 500=23.92%, 750=27.74%, 1000=18.09% 00:19:30.862 lat (msec) : 2000=15.08%, >=2000=13.57% 00:19:30.863 cpu : usr=0.03%, sys=1.74%, ctx=1355, majf=0, minf=32769 00:19:30.863 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.863 issued rwts: total=995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877054: Mon Jul 15 14:54:02 2024 00:19:30.863 read: IOPS=175, BW=175MiB/s (184MB/s)(1767MiB/10073msec) 00:19:30.863 slat (usec): min=39, max=1965.6k, avg=5685.66, stdev=47180.05 00:19:30.863 clat (msec): min=14, max=3068, avg=692.74, stdev=659.68 00:19:30.863 lat (msec): min=73, max=3070, avg=698.43, stdev=662.42 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 186], 5.00th=[ 255], 10.00th=[ 255], 20.00th=[ 257], 00:19:30.863 | 30.00th=[ 259], 40.00th=[ 266], 50.00th=[ 634], 60.00th=[ 676], 00:19:30.863 | 70.00th=[ 743], 80.00th=[ 911], 90.00th=[ 1083], 95.00th=[ 2735], 00:19:30.863 | 99.00th=[ 3004], 99.50th=[ 3071], 99.90th=[ 3071], 99.95th=[ 3071], 00:19:30.863 | 99.99th=[ 3071] 00:19:30.863 bw ( KiB/s): min=40960, max=503808, per=6.30%, avg=211353.60, stdev=158007.64, samples=15 00:19:30.863 iops : min= 40, max= 492, avg=206.40, stdev=154.30, samples=15 00:19:30.863 lat (msec) : 20=0.06%, 100=0.23%, 250=1.47%, 500=46.01%, 750=22.98% 00:19:30.863 lat (msec) : 1000=14.37%, 2000=7.70%, >=2000=7.19% 00:19:30.863 cpu : usr=0.11%, sys=2.42%, ctx=1994, majf=0, minf=32769 00:19:30.863 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.863 issued rwts: total=1767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877055: Mon Jul 15 14:54:02 2024 00:19:30.863 read: IOPS=5, BW=5480KiB/s (5612kB/s)(54.0MiB/10090msec) 00:19:30.863 slat (usec): min=430, max=2112.3k, avg=185305.49, stdev=563461.41 00:19:30.863 clat (msec): min=83, max=10087, avg=6142.28, stdev=4309.47 00:19:30.863 lat (msec): min=90, max=10089, avg=6327.59, stdev=4258.87 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 84], 5.00th=[ 97], 10.00th=[ 110], 20.00th=[ 123], 00:19:30.863 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 8658], 60.00th=[10000], 00:19:30.863 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:19:30.863 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:30.863 | 99.99th=[10134] 00:19:30.863 lat (msec) : 100=7.41%, 250=14.81%, >=2000=77.78% 00:19:30.863 cpu : usr=0.00%, sys=0.39%, ctx=95, majf=0, minf=13825 00:19:30.863 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:30.863 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877056: Mon Jul 15 14:54:02 2024 00:19:30.863 
read: IOPS=9, BW=9406KiB/s (9632kB/s)(111MiB/12084msec) 00:19:30.863 slat (usec): min=601, max=2112.2k, avg=90113.47, stdev=361624.03 00:19:30.863 clat (msec): min=2080, max=12082, avg=5636.65, stdev=3743.59 00:19:30.863 lat (msec): min=2093, max=12083, avg=5726.76, stdev=3777.45 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 2089], 5.00th=[ 2869], 10.00th=[ 2937], 20.00th=[ 3071], 00:19:30.863 | 30.00th=[ 3171], 40.00th=[ 3339], 50.00th=[ 3641], 60.00th=[ 3910], 00:19:30.863 | 70.00th=[ 4279], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:19:30.863 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:30.863 | 99.99th=[12147] 00:19:30.863 lat (msec) : >=2000=100.00% 00:19:30.863 cpu : usr=0.00%, sys=0.50%, ctx=331, majf=0, minf=28417 00:19:30.863 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.2%, 16=14.4%, 32=28.8%, >=64=43.2% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:30.863 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877057: Mon Jul 15 14:54:02 2024 00:19:30.863 read: IOPS=18, BW=19.0MiB/s (19.9MB/s)(191MiB/10072msec) 00:19:30.863 slat (usec): min=359, max=2072.8k, avg=52400.21, stdev=269780.74 00:19:30.863 clat (msec): min=62, max=9725, avg=3654.65, stdev=3810.51 00:19:30.863 lat (msec): min=71, max=9753, avg=3707.05, stdev=3828.51 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 72], 5.00th=[ 150], 10.00th=[ 279], 20.00th=[ 550], 00:19:30.863 | 30.00th=[ 885], 40.00th=[ 1200], 50.00th=[ 1519], 60.00th=[ 1989], 00:19:30.863 | 70.00th=[ 6342], 80.00th=[ 9194], 90.00th=[ 9597], 95.00th=[ 9597], 00:19:30.863 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:19:30.863 | 99.99th=[ 9731] 00:19:30.863 bw ( KiB/s): min=51200, max=77824, per=1.92%, avg=64512.00, stdev=18826.01, samples=2 00:19:30.863 iops : min= 50, max= 76, avg=63.00, stdev=18.38, samples=2 00:19:30.863 lat (msec) : 100=3.14%, 250=5.24%, 500=9.95%, 750=7.33%, 1000=7.33% 00:19:30.863 lat (msec) : 2000=27.23%, >=2000=39.79% 00:19:30.863 cpu : usr=0.00%, sys=0.91%, ctx=484, majf=0, minf=32769 00:19:30.863 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=67.0% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.5% 00:19:30.863 issued rwts: total=191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877058: Mon Jul 15 14:54:02 2024 00:19:30.863 read: IOPS=13, BW=13.2MiB/s (13.9MB/s)(133MiB/10053msec) 00:19:30.863 slat (usec): min=392, max=2106.2k, avg=75216.61, stdev=316628.64 00:19:30.863 clat (msec): min=48, max=10002, avg=6125.06, stdev=3102.51 00:19:30.863 lat (msec): min=85, max=10003, avg=6200.27, stdev=3074.96 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 86], 5.00th=[ 110], 10.00th=[ 1770], 20.00th=[ 2022], 00:19:30.863 | 30.00th=[ 4396], 40.00th=[ 7684], 50.00th=[ 7886], 60.00th=[ 8087], 00:19:30.863 | 70.00th=[ 8221], 80.00th=[ 8423], 90.00th=[ 8658], 95.00th=[ 8658], 00:19:30.863 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:19:30.863 | 99.99th=[10000] 00:19:30.863 bw ( KiB/s): min=12288, 
max=12288, per=0.37%, avg=12288.00, stdev= 0.00, samples=1 00:19:30.863 iops : min= 12, max= 12, avg=12.00, stdev= 0.00, samples=1 00:19:30.863 lat (msec) : 50=0.75%, 100=2.26%, 250=4.51%, 2000=9.77%, >=2000=82.71% 00:19:30.863 cpu : usr=0.00%, sys=0.72%, ctx=333, majf=0, minf=32769 00:19:30.863 IO depths : 1=0.8%, 2=1.5%, 4=3.0%, 8=6.0%, 16=12.0%, 32=24.1%, >=64=52.6% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=85.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=14.3% 00:19:30.863 issued rwts: total=133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877059: Mon Jul 15 14:54:02 2024 00:19:30.863 read: IOPS=38, BW=38.6MiB/s (40.4MB/s)(465MiB/12056msec) 00:19:30.863 slat (usec): min=84, max=2047.9k, avg=25777.73, stdev=175835.09 00:19:30.863 clat (msec): min=66, max=8466, avg=2925.28, stdev=2944.87 00:19:30.863 lat (msec): min=742, max=8471, avg=2951.06, stdev=2951.75 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 743], 5.00th=[ 768], 10.00th=[ 810], 20.00th=[ 885], 00:19:30.863 | 30.00th=[ 919], 40.00th=[ 936], 50.00th=[ 969], 60.00th=[ 1670], 00:19:30.863 | 70.00th=[ 4245], 80.00th=[ 6409], 90.00th=[ 8288], 95.00th=[ 8356], 00:19:30.863 | 99.00th=[ 8423], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:19:30.863 | 99.99th=[ 8490] 00:19:30.863 bw ( KiB/s): min=11719, max=167936, per=2.56%, avg=85688.88, stdev=62842.37, samples=8 00:19:30.863 iops : min= 11, max= 164, avg=83.62, stdev=61.44, samples=8 00:19:30.863 lat (msec) : 100=0.22%, 750=2.58%, 1000=54.19%, 2000=7.31%, >=2000=35.70% 00:19:30.863 cpu : usr=0.02%, sys=0.85%, ctx=885, majf=0, minf=32769 00:19:30.863 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.5% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.863 issued rwts: total=465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877060: Mon Jul 15 14:54:02 2024 00:19:30.863 read: IOPS=6, BW=6823KiB/s (6986kB/s)(80.0MiB/12007msec) 00:19:30.863 slat (usec): min=333, max=2083.1k, avg=125052.71, stdev=429969.82 00:19:30.863 clat (msec): min=2001, max=11960, avg=5056.37, stdev=2335.52 00:19:30.863 lat (msec): min=2006, max=12005, avg=5181.43, stdev=2435.58 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 2005], 5.00th=[ 3239], 10.00th=[ 3339], 20.00th=[ 3540], 00:19:30.863 | 30.00th=[ 3641], 40.00th=[ 3809], 50.00th=[ 3977], 60.00th=[ 4144], 00:19:30.863 | 70.00th=[ 6409], 80.00th=[ 6409], 90.00th=[ 8490], 95.00th=[10671], 00:19:30.863 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:30.863 | 99.99th=[12013] 00:19:30.863 lat (msec) : >=2000=100.00% 00:19:30.863 cpu : usr=0.00%, sys=0.28%, ctx=237, majf=0, minf=20481 00:19:30.863 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:30.863 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.863 job3: (groupid=0, jobs=1): err= 0: pid=2877061: Mon Jul 15 
14:54:02 2024 00:19:30.863 read: IOPS=25, BW=25.3MiB/s (26.5MB/s)(255MiB/10095msec) 00:19:30.863 slat (usec): min=47, max=2114.4k, avg=39309.53, stdev=226132.89 00:19:30.863 clat (msec): min=70, max=8962, avg=2104.47, stdev=2278.95 00:19:30.863 lat (msec): min=154, max=8972, avg=2143.78, stdev=2318.59 00:19:30.863 clat percentiles (msec): 00:19:30.863 | 1.00th=[ 211], 5.00th=[ 292], 10.00th=[ 401], 20.00th=[ 567], 00:19:30.863 | 30.00th=[ 709], 40.00th=[ 927], 50.00th=[ 1183], 60.00th=[ 1334], 00:19:30.863 | 70.00th=[ 3373], 80.00th=[ 3406], 90.00th=[ 3842], 95.00th=[ 8926], 00:19:30.863 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:19:30.863 | 99.99th=[ 8926] 00:19:30.863 bw ( KiB/s): min=32768, max=118784, per=2.59%, avg=86698.67, stdev=46985.13, samples=3 00:19:30.863 iops : min= 32, max= 116, avg=84.67, stdev=45.88, samples=3 00:19:30.863 lat (msec) : 100=0.39%, 250=2.75%, 500=12.55%, 750=16.47%, 1000=9.41% 00:19:30.863 lat (msec) : 2000=26.67%, >=2000=31.76% 00:19:30.863 cpu : usr=0.01%, sys=0.78%, ctx=535, majf=0, minf=32769 00:19:30.863 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.1%, 16=6.3%, 32=12.5%, >=64=75.3% 00:19:30.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.863 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:19:30.863 issued rwts: total=255,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.863 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job3: (groupid=0, jobs=1): err= 0: pid=2877062: Mon Jul 15 14:54:02 2024 00:19:30.864 read: IOPS=32, BW=32.1MiB/s (33.6MB/s)(388MiB/12099msec) 00:19:30.864 slat (usec): min=33, max=2044.5k, avg=25778.12, stdev=155763.49 00:19:30.864 clat (msec): min=1360, max=7125, avg=3744.07, stdev=2195.40 00:19:30.864 lat (msec): min=1364, max=7129, avg=3769.85, stdev=2199.45 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 1385], 5.00th=[ 1452], 10.00th=[ 1502], 20.00th=[ 1620], 00:19:30.864 | 30.00th=[ 1854], 40.00th=[ 2467], 50.00th=[ 2903], 60.00th=[ 3473], 00:19:30.864 | 70.00th=[ 6007], 80.00th=[ 6879], 90.00th=[ 7013], 95.00th=[ 7013], 00:19:30.864 | 99.00th=[ 7080], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:19:30.864 | 99.99th=[ 7148] 00:19:30.864 bw ( KiB/s): min= 8192, max=100352, per=1.59%, avg=53452.80, stdev=35700.92, samples=10 00:19:30.864 iops : min= 8, max= 98, avg=52.20, stdev=34.86, samples=10 00:19:30.864 lat (msec) : 2000=30.93%, >=2000=69.07% 00:19:30.864 cpu : usr=0.01%, sys=0.99%, ctx=877, majf=0, minf=32769 00:19:30.864 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.1%, 32=8.2%, >=64=83.8% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.864 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:30.864 issued rwts: total=388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job3: (groupid=0, jobs=1): err= 0: pid=2877063: Mon Jul 15 14:54:02 2024 00:19:30.864 read: IOPS=22, BW=22.1MiB/s (23.1MB/s)(222MiB/10067msec) 00:19:30.864 slat (usec): min=85, max=2052.9k, avg=45044.23, stdev=246988.13 00:19:30.864 clat (msec): min=65, max=8793, avg=5192.37, stdev=2918.68 00:19:30.864 lat (msec): min=66, max=8799, avg=5237.41, stdev=2903.05 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 70], 5.00th=[ 1385], 10.00th=[ 1435], 20.00th=[ 1603], 00:19:30.864 | 30.00th=[ 2534], 40.00th=[ 4396], 50.00th=[ 6611], 60.00th=[ 6678], 00:19:30.864 | 70.00th=[ 
7953], 80.00th=[ 8288], 90.00th=[ 8557], 95.00th=[ 8658], 00:19:30.864 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:19:30.864 | 99.99th=[ 8792] 00:19:30.864 bw ( KiB/s): min= 8175, max=43008, per=0.83%, avg=27791.86, stdev=14086.94, samples=7 00:19:30.864 iops : min= 7, max= 42, avg=27.00, stdev=13.99, samples=7 00:19:30.864 lat (msec) : 100=1.80%, 250=0.90%, 2000=18.47%, >=2000=78.83% 00:19:30.864 cpu : usr=0.00%, sys=0.93%, ctx=410, majf=0, minf=32769 00:19:30.864 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.4%, >=64=71.6% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.864 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:19:30.864 issued rwts: total=222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job4: (groupid=0, jobs=1): err= 0: pid=2877064: Mon Jul 15 14:54:02 2024 00:19:30.864 read: IOPS=213, BW=213MiB/s (224MB/s)(2172MiB/10179msec) 00:19:30.864 slat (usec): min=39, max=83722, avg=4616.51, stdev=7455.06 00:19:30.864 clat (msec): min=138, max=1516, avg=562.77, stdev=330.79 00:19:30.864 lat (msec): min=191, max=1518, avg=567.38, stdev=332.83 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 253], 5.00th=[ 255], 10.00th=[ 257], 20.00th=[ 259], 00:19:30.864 | 30.00th=[ 264], 40.00th=[ 384], 50.00th=[ 468], 60.00th=[ 535], 00:19:30.864 | 70.00th=[ 701], 80.00th=[ 835], 90.00th=[ 1099], 95.00th=[ 1267], 00:19:30.864 | 99.00th=[ 1435], 99.50th=[ 1469], 99.90th=[ 1519], 99.95th=[ 1519], 00:19:30.864 | 99.99th=[ 1519] 00:19:30.864 bw ( KiB/s): min=49152, max=505856, per=6.94%, avg=232561.78, stdev=147738.87, samples=18 00:19:30.864 iops : min= 48, max= 494, avg=227.11, stdev=144.28, samples=18 00:19:30.864 lat (msec) : 250=0.23%, 500=51.75%, 750=20.90%, 1000=12.89%, 2000=14.23% 00:19:30.864 cpu : usr=0.12%, sys=2.81%, ctx=2507, majf=0, minf=32769 00:19:30.864 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.864 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.864 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job4: (groupid=0, jobs=1): err= 0: pid=2877065: Mon Jul 15 14:54:02 2024 00:19:30.864 read: IOPS=40, BW=40.2MiB/s (42.1MB/s)(405MiB/10080msec) 00:19:30.864 slat (usec): min=51, max=1921.5k, avg=24696.21, stdev=112748.29 00:19:30.864 clat (msec): min=76, max=4358, avg=2561.96, stdev=1272.08 00:19:30.864 lat (msec): min=80, max=4373, avg=2586.65, stdev=1271.69 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 104], 5.00th=[ 464], 10.00th=[ 885], 20.00th=[ 1720], 00:19:30.864 | 30.00th=[ 1871], 40.00th=[ 1972], 50.00th=[ 2165], 60.00th=[ 2433], 00:19:30.864 | 70.00th=[ 4010], 80.00th=[ 4178], 90.00th=[ 4279], 95.00th=[ 4329], 00:19:30.864 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:19:30.864 | 99.99th=[ 4329] 00:19:30.864 bw ( KiB/s): min=12288, max=94208, per=1.54%, avg=51758.55, stdev=22267.53, samples=11 00:19:30.864 iops : min= 12, max= 92, avg=50.55, stdev=21.75, samples=11 00:19:30.864 lat (msec) : 100=0.99%, 250=1.73%, 500=3.21%, 750=2.47%, 1000=3.21% 00:19:30.864 lat (msec) : 2000=31.11%, >=2000=57.28% 00:19:30.864 cpu : usr=0.01%, sys=0.99%, ctx=1342, majf=0, minf=32769 00:19:30.864 IO depths 
: 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.864 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:30.864 issued rwts: total=405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job4: (groupid=0, jobs=1): err= 0: pid=2877066: Mon Jul 15 14:54:02 2024 00:19:30.864 read: IOPS=63, BW=63.3MiB/s (66.4MB/s)(640MiB/10108msec) 00:19:30.864 slat (usec): min=51, max=1193.9k, avg=15621.83, stdev=49584.06 00:19:30.864 clat (msec): min=105, max=3707, avg=1591.48, stdev=677.64 00:19:30.864 lat (msec): min=138, max=4901, avg=1607.10, stdev=684.79 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 232], 5.00th=[ 651], 10.00th=[ 676], 20.00th=[ 961], 00:19:30.864 | 30.00th=[ 1200], 40.00th=[ 1435], 50.00th=[ 1653], 60.00th=[ 1804], 00:19:30.864 | 70.00th=[ 1888], 80.00th=[ 2165], 90.00th=[ 2400], 95.00th=[ 2500], 00:19:30.864 | 99.00th=[ 3574], 99.50th=[ 3641], 99.90th=[ 3708], 99.95th=[ 3708], 00:19:30.864 | 99.99th=[ 3708] 00:19:30.864 bw ( KiB/s): min=20480, max=196608, per=2.24%, avg=75044.57, stdev=51074.49, samples=14 00:19:30.864 iops : min= 20, max= 192, avg=73.29, stdev=49.88, samples=14 00:19:30.864 lat (msec) : 250=1.25%, 500=1.56%, 750=10.63%, 1000=8.12%, 2000=55.16% 00:19:30.864 lat (msec) : >=2000=23.28% 00:19:30.864 cpu : usr=0.03%, sys=1.30%, ctx=1715, majf=0, minf=32769 00:19:30.864 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.864 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.864 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job4: (groupid=0, jobs=1): err= 0: pid=2877067: Mon Jul 15 14:54:02 2024 00:19:30.864 read: IOPS=29, BW=29.2MiB/s (30.6MB/s)(294MiB/10084msec) 00:19:30.864 slat (usec): min=457, max=2099.5k, avg=34023.37, stdev=185996.99 00:19:30.864 clat (msec): min=79, max=7040, avg=1924.02, stdev=1216.31 00:19:30.864 lat (msec): min=134, max=7046, avg=1958.05, stdev=1248.70 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 146], 5.00th=[ 284], 10.00th=[ 592], 20.00th=[ 1234], 00:19:30.864 | 30.00th=[ 1401], 40.00th=[ 1485], 50.00th=[ 1754], 60.00th=[ 1989], 00:19:30.864 | 70.00th=[ 2366], 80.00th=[ 2467], 90.00th=[ 2567], 95.00th=[ 3608], 00:19:30.864 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:19:30.864 | 99.99th=[ 7013] 00:19:30.864 bw ( KiB/s): min=30720, max=94208, per=1.70%, avg=57002.67, stdev=27580.92, samples=6 00:19:30.864 iops : min= 30, max= 92, avg=55.67, stdev=26.93, samples=6 00:19:30.864 lat (msec) : 100=0.34%, 250=1.70%, 500=6.46%, 750=3.06%, 1000=2.72% 00:19:30.864 lat (msec) : 2000=46.26%, >=2000=39.46% 00:19:30.864 cpu : usr=0.01%, sys=0.86%, ctx=876, majf=0, minf=32769 00:19:30.864 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.4%, 32=10.9%, >=64=78.6% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.864 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:30.864 issued rwts: total=294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job4: (groupid=0, jobs=1): err= 0: pid=2877068: Mon Jul 15 14:54:02 2024 
00:19:30.864 read: IOPS=70, BW=70.3MiB/s (73.7MB/s)(711MiB/10109msec) 00:19:30.864 slat (usec): min=40, max=2100.4k, avg=14078.32, stdev=91584.81 00:19:30.864 clat (msec): min=96, max=4173, avg=1321.64, stdev=856.66 00:19:30.864 lat (msec): min=136, max=5930, avg=1335.72, stdev=874.24 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 150], 5.00th=[ 384], 10.00th=[ 634], 20.00th=[ 735], 00:19:30.864 | 30.00th=[ 785], 40.00th=[ 860], 50.00th=[ 944], 60.00th=[ 1284], 00:19:30.864 | 70.00th=[ 1737], 80.00th=[ 1905], 90.00th=[ 2072], 95.00th=[ 3910], 00:19:30.864 | 99.00th=[ 4144], 99.50th=[ 4178], 99.90th=[ 4178], 99.95th=[ 4178], 00:19:30.864 | 99.99th=[ 4178] 00:19:30.864 bw ( KiB/s): min=43008, max=157696, per=2.97%, avg=99498.67, stdev=43937.36, samples=12 00:19:30.864 iops : min= 42, max= 154, avg=97.17, stdev=42.91, samples=12 00:19:30.864 lat (msec) : 100=0.14%, 250=2.95%, 500=4.22%, 750=19.27%, 1000=25.32% 00:19:30.864 lat (msec) : 2000=33.90%, >=2000=14.21% 00:19:30.864 cpu : usr=0.02%, sys=1.24%, ctx=1340, majf=0, minf=32769 00:19:30.864 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.864 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.864 issued rwts: total=711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.864 job4: (groupid=0, jobs=1): err= 0: pid=2877069: Mon Jul 15 14:54:02 2024 00:19:30.864 read: IOPS=28, BW=28.5MiB/s (29.9MB/s)(290MiB/10162msec) 00:19:30.864 slat (usec): min=42, max=2084.4k, avg=34600.03, stdev=168149.16 00:19:30.864 clat (msec): min=126, max=7734, avg=2736.32, stdev=1864.42 00:19:30.864 lat (msec): min=185, max=7754, avg=2770.92, stdev=1881.98 00:19:30.864 clat percentiles (msec): 00:19:30.864 | 1.00th=[ 232], 5.00th=[ 485], 10.00th=[ 919], 20.00th=[ 1670], 00:19:30.864 | 30.00th=[ 1955], 40.00th=[ 2106], 50.00th=[ 2165], 60.00th=[ 2232], 00:19:30.864 | 70.00th=[ 2433], 80.00th=[ 4044], 90.00th=[ 6812], 95.00th=[ 7080], 00:19:30.864 | 99.00th=[ 7617], 99.50th=[ 7684], 99.90th=[ 7752], 99.95th=[ 7752], 00:19:30.864 | 99.99th=[ 7752] 00:19:30.864 bw ( KiB/s): min=18432, max=108544, per=1.41%, avg=47396.57, stdev=30363.60, samples=7 00:19:30.864 iops : min= 18, max= 106, avg=46.29, stdev=29.65, samples=7 00:19:30.864 lat (msec) : 250=1.38%, 500=4.48%, 750=2.76%, 1000=1.38%, 2000=22.41% 00:19:30.864 lat (msec) : >=2000=67.59% 00:19:30.864 cpu : usr=0.01%, sys=0.93%, ctx=881, majf=0, minf=32769 00:19:30.864 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.5%, 32=11.0%, >=64=78.3% 00:19:30.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:30.865 issued rwts: total=290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job4: (groupid=0, jobs=1): err= 0: pid=2877070: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=41, BW=41.7MiB/s (43.7MB/s)(421MiB/10096msec) 00:19:30.865 slat (usec): min=69, max=2137.7k, avg=23789.53, stdev=120086.04 00:19:30.865 clat (msec): min=78, max=4248, avg=2512.04, stdev=1144.63 00:19:30.865 lat (msec): min=134, max=4250, avg=2535.83, stdev=1145.17 00:19:30.865 clat percentiles (msec): 00:19:30.865 | 1.00th=[ 222], 5.00th=[ 768], 10.00th=[ 1083], 20.00th=[ 1586], 00:19:30.865 | 30.00th=[ 1972], 40.00th=[ 2039], 50.00th=[ 
2165], 60.00th=[ 2299], 00:19:30.865 | 70.00th=[ 3876], 80.00th=[ 3977], 90.00th=[ 4044], 95.00th=[ 4111], 00:19:30.865 | 99.00th=[ 4245], 99.50th=[ 4245], 99.90th=[ 4245], 99.95th=[ 4245], 00:19:30.865 | 99.99th=[ 4245] 00:19:30.865 bw ( KiB/s): min= 2048, max=88064, per=1.49%, avg=50005.33, stdev=28289.89, samples=12 00:19:30.865 iops : min= 2, max= 86, avg=48.83, stdev=27.63, samples=12 00:19:30.865 lat (msec) : 100=0.24%, 250=1.43%, 500=1.19%, 750=1.90%, 1000=4.04% 00:19:30.865 lat (msec) : 2000=23.99%, >=2000=67.22% 00:19:30.865 cpu : usr=0.03%, sys=1.01%, ctx=1247, majf=0, minf=32769 00:19:30.865 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:19:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.865 issued rwts: total=421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job4: (groupid=0, jobs=1): err= 0: pid=2877071: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=67, BW=67.1MiB/s (70.4MB/s)(677MiB/10090msec) 00:19:30.865 slat (usec): min=43, max=1176.7k, avg=14780.35, stdev=55640.43 00:19:30.865 clat (msec): min=79, max=3788, avg=1642.91, stdev=640.87 00:19:30.865 lat (msec): min=136, max=4444, avg=1657.69, stdev=642.98 00:19:30.865 clat percentiles (msec): 00:19:30.865 | 1.00th=[ 243], 5.00th=[ 684], 10.00th=[ 743], 20.00th=[ 793], 00:19:30.865 | 30.00th=[ 1334], 40.00th=[ 1636], 50.00th=[ 1804], 60.00th=[ 1888], 00:19:30.865 | 70.00th=[ 1989], 80.00th=[ 2165], 90.00th=[ 2400], 95.00th=[ 2567], 00:19:30.865 | 99.00th=[ 2702], 99.50th=[ 3742], 99.90th=[ 3775], 99.95th=[ 3775], 00:19:30.865 | 99.99th=[ 3775] 00:19:30.865 bw ( KiB/s): min=14336, max=159744, per=2.10%, avg=70272.00, stdev=39787.59, samples=16 00:19:30.865 iops : min= 14, max= 156, avg=68.62, stdev=38.86, samples=16 00:19:30.865 lat (msec) : 100=0.15%, 250=0.89%, 500=1.18%, 750=9.90%, 1000=11.52% 00:19:30.865 lat (msec) : 2000=47.86%, >=2000=28.51% 00:19:30.865 cpu : usr=0.07%, sys=1.42%, ctx=1350, majf=0, minf=32769 00:19:30.865 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:19:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.865 issued rwts: total=677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job4: (groupid=0, jobs=1): err= 0: pid=2877072: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=70, BW=70.8MiB/s (74.2MB/s)(716MiB/10114msec) 00:19:30.865 slat (usec): min=37, max=2041.8k, avg=13962.35, stdev=77463.12 00:19:30.865 clat (msec): min=112, max=6286, avg=1721.01, stdev=1058.26 00:19:30.865 lat (msec): min=155, max=6329, avg=1734.97, stdev=1068.10 00:19:30.865 clat percentiles (msec): 00:19:30.865 | 1.00th=[ 363], 5.00th=[ 600], 10.00th=[ 667], 20.00th=[ 760], 00:19:30.865 | 30.00th=[ 986], 40.00th=[ 1301], 50.00th=[ 1485], 60.00th=[ 1586], 00:19:30.865 | 70.00th=[ 1955], 80.00th=[ 2232], 90.00th=[ 3675], 95.00th=[ 3742], 00:19:30.865 | 99.00th=[ 4279], 99.50th=[ 4396], 99.90th=[ 6275], 99.95th=[ 6275], 00:19:30.865 | 99.99th=[ 6275] 00:19:30.865 bw ( KiB/s): min=24576, max=194560, per=2.25%, avg=75392.00, stdev=41247.21, samples=16 00:19:30.865 iops : min= 24, max= 190, avg=73.62, stdev=40.28, samples=16 00:19:30.865 lat (msec) : 250=0.70%, 500=1.26%, 750=16.62%, 
1000=11.59%, 2000=40.64% 00:19:30.865 lat (msec) : >=2000=29.19% 00:19:30.865 cpu : usr=0.02%, sys=1.52%, ctx=1472, majf=0, minf=32769 00:19:30.865 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:19:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.865 issued rwts: total=716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job4: (groupid=0, jobs=1): err= 0: pid=2877073: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=39, BW=39.4MiB/s (41.3MB/s)(475MiB/12063msec) 00:19:30.865 slat (usec): min=39, max=2138.3k, avg=25241.94, stdev=195883.49 00:19:30.865 clat (msec): min=69, max=8617, avg=1235.57, stdev=1306.76 00:19:30.865 lat (msec): min=390, max=8691, avg=1260.81, stdev=1350.75 00:19:30.865 clat percentiles (msec): 00:19:30.865 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 397], 20.00th=[ 397], 00:19:30.865 | 30.00th=[ 401], 40.00th=[ 409], 50.00th=[ 625], 60.00th=[ 894], 00:19:30.865 | 70.00th=[ 2165], 80.00th=[ 2333], 90.00th=[ 2467], 95.00th=[ 2534], 00:19:30.865 | 99.00th=[ 7416], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:19:30.865 | 99.99th=[ 8658] 00:19:30.865 bw ( KiB/s): min=135168, max=325632, per=7.06%, avg=236553.33, stdev=95826.53, samples=3 00:19:30.865 iops : min= 132, max= 318, avg=231.00, stdev=93.58, samples=3 00:19:30.865 lat (msec) : 100=0.21%, 500=45.68%, 750=8.63%, 1000=12.00%, 2000=2.95% 00:19:30.865 lat (msec) : >=2000=30.53% 00:19:30.865 cpu : usr=0.02%, sys=0.90%, ctx=675, majf=0, minf=32769 00:19:30.865 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.7% 00:19:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.865 issued rwts: total=475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job4: (groupid=0, jobs=1): err= 0: pid=2877074: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=51, BW=51.9MiB/s (54.4MB/s)(524MiB/10099msec) 00:19:30.865 slat (usec): min=49, max=1488.2k, avg=19078.58, stdev=74333.69 00:19:30.865 clat (msec): min=98, max=5265, avg=2267.10, stdev=1423.87 00:19:30.865 lat (msec): min=106, max=5555, avg=2286.18, stdev=1428.32 00:19:30.865 clat percentiles (msec): 00:19:30.865 | 1.00th=[ 124], 5.00th=[ 456], 10.00th=[ 1011], 20.00th=[ 1133], 00:19:30.865 | 30.00th=[ 1250], 40.00th=[ 1368], 50.00th=[ 1972], 60.00th=[ 2299], 00:19:30.865 | 70.00th=[ 2467], 80.00th=[ 4044], 90.00th=[ 4799], 95.00th=[ 4933], 00:19:30.865 | 99.00th=[ 5134], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:19:30.865 | 99.99th=[ 5269] 00:19:30.865 bw ( KiB/s): min= 2048, max=133120, per=1.52%, avg=50816.00, stdev=36130.18, samples=16 00:19:30.865 iops : min= 2, max= 130, avg=49.62, stdev=35.28, samples=16 00:19:30.865 lat (msec) : 100=0.19%, 250=2.48%, 500=2.86%, 750=2.29%, 1000=1.34% 00:19:30.865 lat (msec) : 2000=41.98%, >=2000=48.85% 00:19:30.865 cpu : usr=0.05%, sys=1.20%, ctx=1568, majf=0, minf=32769 00:19:30.865 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0% 00:19:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.865 issued rwts: total=524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job4: (groupid=0, jobs=1): err= 0: pid=2877075: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=32, BW=32.5MiB/s (34.0MB/s)(327MiB/10076msec) 00:19:30.865 slat (usec): min=536, max=2075.9k, avg=30589.12, stdev=173377.77 00:19:30.865 clat (msec): min=71, max=6661, avg=3118.93, stdev=2374.41 00:19:30.865 lat (msec): min=81, max=6662, avg=3149.52, stdev=2375.23 00:19:30.865 clat percentiles (msec): 00:19:30.865 | 1.00th=[ 86], 5.00th=[ 284], 10.00th=[ 625], 20.00th=[ 1183], 00:19:30.865 | 30.00th=[ 1267], 40.00th=[ 1351], 50.00th=[ 1737], 60.00th=[ 3205], 00:19:30.865 | 70.00th=[ 5537], 80.00th=[ 6074], 90.00th=[ 6477], 95.00th=[ 6611], 00:19:30.865 | 99.00th=[ 6611], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:19:30.865 | 99.99th=[ 6678] 00:19:30.865 bw ( KiB/s): min=12288, max=116736, per=1.36%, avg=45511.11, stdev=30784.41, samples=9 00:19:30.865 iops : min= 12, max= 114, avg=44.44, stdev=30.06, samples=9 00:19:30.865 lat (msec) : 100=1.83%, 250=2.75%, 500=3.36%, 750=3.98%, 1000=2.75% 00:19:30.865 lat (msec) : 2000=39.45%, >=2000=45.87% 00:19:30.865 cpu : usr=0.00%, sys=0.85%, ctx=862, majf=0, minf=32769 00:19:30.865 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.7% 00:19:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:30.865 issued rwts: total=327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job4: (groupid=0, jobs=1): err= 0: pid=2877076: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=23, BW=23.5MiB/s (24.7MB/s)(237MiB/10080msec) 00:19:30.865 slat (usec): min=403, max=2069.7k, avg=42194.98, stdev=241577.79 00:19:30.865 clat (msec): min=78, max=8324, avg=2238.26, stdev=2266.84 00:19:30.865 lat (msec): min=82, max=8329, avg=2280.46, stdev=2296.37 00:19:30.865 clat percentiles (msec): 00:19:30.865 | 1.00th=[ 90], 5.00th=[ 284], 10.00th=[ 481], 20.00th=[ 911], 00:19:30.865 | 30.00th=[ 1183], 40.00th=[ 1401], 50.00th=[ 1569], 60.00th=[ 1670], 00:19:30.865 | 70.00th=[ 1804], 80.00th=[ 1871], 90.00th=[ 7013], 95.00th=[ 8288], 00:19:30.865 | 99.00th=[ 8288], 99.50th=[ 8288], 99.90th=[ 8356], 99.95th=[ 8356], 00:19:30.865 | 99.99th=[ 8356] 00:19:30.865 bw ( KiB/s): min=47104, max=69632, per=1.68%, avg=56320.00, stdev=10375.64, samples=4 00:19:30.865 iops : min= 46, max= 68, avg=55.00, stdev=10.13, samples=4 00:19:30.865 lat (msec) : 100=2.11%, 250=2.11%, 500=6.33%, 750=6.75%, 1000=5.91% 00:19:30.865 lat (msec) : 2000=57.38%, >=2000=19.41% 00:19:30.865 cpu : usr=0.00%, sys=0.84%, ctx=571, majf=0, minf=32769 00:19:30.865 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.5%, >=64=73.4% 00:19:30.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.865 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:19:30.865 issued rwts: total=237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.865 job5: (groupid=0, jobs=1): err= 0: pid=2877077: Mon Jul 15 14:54:02 2024 00:19:30.865 read: IOPS=32, BW=32.3MiB/s (33.9MB/s)(326MiB/10093msec) 00:19:30.865 slat (usec): min=61, max=2073.6k, avg=30729.01, stdev=174288.31 00:19:30.865 clat (msec): min=74, max=8040, avg=2341.99, stdev=1410.13 00:19:30.865 lat (msec): min=106, max=8126, avg=2372.72, stdev=1438.01 00:19:30.865 
clat percentiles (msec): 00:19:30.865 | 1.00th=[ 136], 5.00th=[ 313], 10.00th=[ 527], 20.00th=[ 1099], 00:19:30.865 | 30.00th=[ 1435], 40.00th=[ 1670], 50.00th=[ 2022], 60.00th=[ 3239], 00:19:30.865 | 70.00th=[ 3306], 80.00th=[ 3574], 90.00th=[ 4010], 95.00th=[ 4144], 00:19:30.865 | 99.00th=[ 6007], 99.50th=[ 6074], 99.90th=[ 8020], 99.95th=[ 8020], 00:19:30.865 | 99.99th=[ 8020] 00:19:30.865 bw ( KiB/s): min=34816, max=75776, per=1.73%, avg=57929.14, stdev=14515.99, samples=7 00:19:30.866 iops : min= 34, max= 74, avg=56.57, stdev=14.18, samples=7 00:19:30.866 lat (msec) : 100=0.31%, 250=3.68%, 500=5.52%, 750=4.60%, 1000=4.91% 00:19:30.866 lat (msec) : 2000=28.22%, >=2000=52.76% 00:19:30.866 cpu : usr=0.00%, sys=0.83%, ctx=831, majf=0, minf=32769 00:19:30.866 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.8%, >=64=80.7% 00:19:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.866 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:30.866 issued rwts: total=326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.866 job5: (groupid=0, jobs=1): err= 0: pid=2877078: Mon Jul 15 14:54:02 2024 00:19:30.866 read: IOPS=16, BW=16.4MiB/s (17.2MB/s)(166MiB/10123msec) 00:19:30.866 slat (usec): min=672, max=2173.4k, avg=60260.13, stdev=296325.61 00:19:30.866 clat (msec): min=118, max=10025, avg=4082.31, stdev=3856.15 00:19:30.866 lat (msec): min=123, max=10027, avg=4142.57, stdev=3871.90 00:19:30.866 clat percentiles (msec): 00:19:30.866 | 1.00th=[ 124], 5.00th=[ 271], 10.00th=[ 376], 20.00th=[ 953], 00:19:30.866 | 30.00th=[ 1150], 40.00th=[ 1485], 50.00th=[ 1905], 60.00th=[ 2165], 00:19:30.866 | 70.00th=[ 8658], 80.00th=[ 9194], 90.00th=[ 9731], 95.00th=[ 9866], 00:19:30.866 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:19:30.866 | 99.99th=[10000] 00:19:30.866 bw ( KiB/s): min=36864, max=43008, per=1.19%, avg=39936.00, stdev=4344.46, samples=2 00:19:30.866 iops : min= 36, max= 42, avg=39.00, stdev= 4.24, samples=2 00:19:30.866 lat (msec) : 250=4.82%, 500=7.23%, 750=3.01%, 1000=6.63%, 2000=29.52% 00:19:30.866 lat (msec) : >=2000=48.80% 00:19:30.866 cpu : usr=0.00%, sys=0.86%, ctx=578, majf=0, minf=32769 00:19:30.866 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.3%, >=64=62.0% 00:19:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.866 complete : 0=0.0%, 4=97.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.5% 00:19:30.866 issued rwts: total=166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.866 job5: (groupid=0, jobs=1): err= 0: pid=2877079: Mon Jul 15 14:54:02 2024 00:19:30.866 read: IOPS=104, BW=105MiB/s (110MB/s)(1057MiB/10087msec) 00:19:30.866 slat (usec): min=41, max=2089.5k, avg=9515.82, stdev=91475.83 00:19:30.866 clat (msec): min=24, max=5691, avg=1173.31, stdev=1659.16 00:19:30.866 lat (msec): min=98, max=5692, avg=1182.83, stdev=1665.11 00:19:30.866 clat percentiles (msec): 00:19:30.866 | 1.00th=[ 201], 5.00th=[ 247], 10.00th=[ 259], 20.00th=[ 262], 00:19:30.866 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 414], 60.00th=[ 584], 00:19:30.866 | 70.00th=[ 818], 80.00th=[ 1552], 90.00th=[ 5403], 95.00th=[ 5537], 00:19:30.866 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5671], 99.95th=[ 5671], 00:19:30.866 | 99.99th=[ 5671] 00:19:30.866 bw ( KiB/s): min= 4087, max=491520, per=4.73%, avg=158548.58, stdev=166618.00, 
samples=12 00:19:30.866 iops : min= 3, max= 480, avg=154.75, stdev=162.80, samples=12 00:19:30.866 lat (msec) : 50=0.09%, 100=0.09%, 250=6.34%, 500=49.39%, 750=11.26% 00:19:30.866 lat (msec) : 1000=6.34%, 2000=12.58%, >=2000=13.91% 00:19:30.866 cpu : usr=0.04%, sys=1.49%, ctx=1536, majf=0, minf=32769 00:19:30.866 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:19:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.866 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.866 issued rwts: total=1057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.866 job5: (groupid=0, jobs=1): err= 0: pid=2877080: Mon Jul 15 14:54:02 2024 00:19:30.866 read: IOPS=235, BW=236MiB/s (247MB/s)(2375MiB/10070msec) 00:19:30.866 slat (usec): min=29, max=2014.7k, avg=4219.32, stdev=48627.31 00:19:30.866 clat (msec): min=38, max=2579, avg=471.33, stdev=514.97 00:19:30.866 lat (msec): min=120, max=2588, avg=475.55, stdev=517.41 00:19:30.866 clat percentiles (msec): 00:19:30.866 | 1.00th=[ 122], 5.00th=[ 174], 10.00th=[ 230], 20.00th=[ 264], 00:19:30.866 | 30.00th=[ 266], 40.00th=[ 342], 50.00th=[ 393], 60.00th=[ 393], 00:19:30.866 | 70.00th=[ 397], 80.00th=[ 422], 90.00th=[ 502], 95.00th=[ 2467], 00:19:30.866 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2567], 99.95th=[ 2567], 00:19:30.866 | 99.99th=[ 2567] 00:19:30.866 bw ( KiB/s): min= 8192, max=643072, per=9.81%, avg=328704.00, stdev=148130.02, samples=14 00:19:30.866 iops : min= 8, max= 628, avg=321.00, stdev=144.66, samples=14 00:19:30.866 lat (msec) : 50=0.04%, 250=11.83%, 500=78.11%, 750=3.62%, 2000=1.05% 00:19:30.866 lat (msec) : >=2000=5.35% 00:19:30.866 cpu : usr=0.14%, sys=2.43%, ctx=2169, majf=0, minf=32769 00:19:30.866 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.3% 00:19:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.866 issued rwts: total=2375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.866 job5: (groupid=0, jobs=1): err= 0: pid=2877081: Mon Jul 15 14:54:02 2024 00:19:30.866 read: IOPS=27, BW=27.4MiB/s (28.7MB/s)(277MiB/10123msec) 00:19:30.866 slat (usec): min=48, max=2073.1k, avg=36450.36, stdev=196167.33 00:19:30.866 clat (msec): min=24, max=8299, avg=3308.63, stdev=2518.25 00:19:30.866 lat (msec): min=125, max=8325, avg=3345.08, stdev=2535.07 00:19:30.866 clat percentiles (msec): 00:19:30.866 | 1.00th=[ 133], 5.00th=[ 313], 10.00th=[ 485], 20.00th=[ 827], 00:19:30.866 | 30.00th=[ 1217], 40.00th=[ 1787], 50.00th=[ 2089], 60.00th=[ 4732], 00:19:30.866 | 70.00th=[ 4799], 80.00th=[ 4866], 90.00th=[ 7819], 95.00th=[ 7953], 00:19:30.866 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8288], 99.95th=[ 8288], 00:19:30.866 | 99.99th=[ 8288] 00:19:30.866 bw ( KiB/s): min=14336, max=79872, per=1.52%, avg=50858.67, stdev=24384.68, samples=6 00:19:30.866 iops : min= 14, max= 78, avg=49.67, stdev=23.81, samples=6 00:19:30.866 lat (msec) : 50=0.36%, 250=1.81%, 500=9.39%, 750=7.94%, 1000=4.69% 00:19:30.866 lat (msec) : 2000=20.94%, >=2000=54.87% 00:19:30.866 cpu : usr=0.00%, sys=0.93%, ctx=740, majf=0, minf=32769 00:19:30.866 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.3% 00:19:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:30.866 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:30.866 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.866 job5: (groupid=0, jobs=1): err= 0: pid=2877082: Mon Jul 15 14:54:02 2024 00:19:30.866 read: IOPS=137, BW=137MiB/s (144MB/s)(1380MiB/10045msec) 00:19:30.866 slat (usec): min=37, max=2070.1k, avg=7248.97, stdev=57299.37 00:19:30.866 clat (msec): min=34, max=3208, avg=863.82, stdev=696.18 00:19:30.866 lat (msec): min=47, max=3209, avg=871.07, stdev=699.34 00:19:30.866 clat percentiles (msec): 00:19:30.866 | 1.00th=[ 95], 5.00th=[ 330], 10.00th=[ 493], 20.00th=[ 514], 00:19:30.866 | 30.00th=[ 609], 40.00th=[ 667], 50.00th=[ 709], 60.00th=[ 735], 00:19:30.866 | 70.00th=[ 751], 80.00th=[ 802], 90.00th=[ 1083], 95.00th=[ 2937], 00:19:30.866 | 99.00th=[ 3138], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205], 00:19:30.866 | 99.99th=[ 3205] 00:19:30.866 bw ( KiB/s): min=38912, max=256000, per=4.93%, avg=165302.86, stdev=60286.65, samples=14 00:19:30.866 iops : min= 38, max= 250, avg=161.43, stdev=58.87, samples=14 00:19:30.866 lat (msec) : 50=0.14%, 100=0.94%, 250=2.32%, 500=11.23%, 750=55.43% 00:19:30.866 lat (msec) : 1000=19.28%, 2000=1.38%, >=2000=9.28% 00:19:30.866 cpu : usr=0.12%, sys=1.71%, ctx=1368, majf=0, minf=32769 00:19:30.866 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:19:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.866 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.866 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.866 job5: (groupid=0, jobs=1): err= 0: pid=2877083: Mon Jul 15 14:54:02 2024 00:19:30.866 read: IOPS=70, BW=70.4MiB/s (73.9MB/s)(710MiB/10080msec) 00:19:30.866 slat (usec): min=42, max=1997.5k, avg=14088.29, stdev=82642.09 00:19:30.866 clat (msec): min=72, max=5840, avg=1729.54, stdev=1388.08 00:19:30.866 lat (msec): min=103, max=6701, avg=1743.63, stdev=1397.17 00:19:30.866 clat percentiles (msec): 00:19:30.866 | 1.00th=[ 178], 5.00th=[ 575], 10.00th=[ 642], 20.00th=[ 693], 00:19:30.866 | 30.00th=[ 760], 40.00th=[ 802], 50.00th=[ 844], 60.00th=[ 1569], 00:19:30.866 | 70.00th=[ 2165], 80.00th=[ 2903], 90.00th=[ 3876], 95.00th=[ 4245], 00:19:30.866 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5873], 99.95th=[ 5873], 00:19:30.866 | 99.99th=[ 5873] 00:19:30.866 bw ( KiB/s): min= 2048, max=188416, per=2.23%, avg=74606.62, stdev=55178.60, samples=16 00:19:30.866 iops : min= 2, max= 184, avg=72.81, stdev=53.89, samples=16 00:19:30.866 lat (msec) : 100=0.14%, 250=1.41%, 500=2.39%, 750=24.51%, 1000=25.21% 00:19:30.866 lat (msec) : 2000=15.21%, >=2000=31.13% 00:19:30.866 cpu : usr=0.07%, sys=1.49%, ctx=1326, majf=0, minf=32769 00:19:30.866 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:19:30.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.866 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.866 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.866 job5: (groupid=0, jobs=1): err= 0: pid=2877084: Mon Jul 15 14:54:02 2024 00:19:30.866 read: IOPS=17, BW=17.5MiB/s (18.3MB/s)(177MiB/10120msec) 00:19:30.867 slat (usec): min=733, max=2159.0k, avg=56722.50, 
stdev=270533.83 00:19:30.867 clat (msec): min=79, max=9732, avg=3578.14, stdev=3647.46 00:19:30.867 lat (msec): min=142, max=9736, avg=3634.86, stdev=3670.05 00:19:30.867 clat percentiles (msec): 00:19:30.867 | 1.00th=[ 144], 5.00th=[ 271], 10.00th=[ 430], 20.00th=[ 793], 00:19:30.867 | 30.00th=[ 1083], 40.00th=[ 1284], 50.00th=[ 1435], 60.00th=[ 1888], 00:19:30.867 | 70.00th=[ 6208], 80.00th=[ 9194], 90.00th=[ 9463], 95.00th=[ 9597], 00:19:30.867 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:19:30.867 | 99.99th=[ 9731] 00:19:30.867 bw ( KiB/s): min=49152, max=51200, per=1.50%, avg=50176.00, stdev=1448.15, samples=2 00:19:30.867 iops : min= 48, max= 50, avg=49.00, stdev= 1.41, samples=2 00:19:30.867 lat (msec) : 100=0.56%, 250=3.95%, 500=6.78%, 750=7.91%, 1000=6.78% 00:19:30.867 lat (msec) : 2000=35.59%, >=2000=38.42% 00:19:30.867 cpu : usr=0.00%, sys=0.87%, ctx=623, majf=0, minf=32769 00:19:30.867 IO depths : 1=0.6%, 2=1.1%, 4=2.3%, 8=4.5%, 16=9.0%, 32=18.1%, >=64=64.4% 00:19:30.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.867 complete : 0=0.0%, 4=98.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.0% 00:19:30.867 issued rwts: total=177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.867 job5: (groupid=0, jobs=1): err= 0: pid=2877085: Mon Jul 15 14:54:02 2024 00:19:30.867 read: IOPS=58, BW=58.5MiB/s (61.3MB/s)(587MiB/10036msec) 00:19:30.867 slat (usec): min=91, max=2106.1k, avg=17051.05, stdev=115233.45 00:19:30.867 clat (msec): min=24, max=4647, avg=1830.67, stdev=1245.97 00:19:30.867 lat (msec): min=125, max=4654, avg=1847.72, stdev=1251.88 00:19:30.867 clat percentiles (msec): 00:19:30.867 | 1.00th=[ 140], 5.00th=[ 264], 10.00th=[ 266], 20.00th=[ 266], 00:19:30.867 | 30.00th=[ 498], 40.00th=[ 1670], 50.00th=[ 2198], 60.00th=[ 2299], 00:19:30.867 | 70.00th=[ 2467], 80.00th=[ 3037], 90.00th=[ 3406], 95.00th=[ 3608], 00:19:30.867 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665], 00:19:30.867 | 99.99th=[ 4665] 00:19:30.867 bw ( KiB/s): min=12288, max=415744, per=3.12%, avg=104448.00, stdev=127918.05, samples=9 00:19:30.867 iops : min= 12, max= 406, avg=102.00, stdev=124.92, samples=9 00:19:30.867 lat (msec) : 50=0.17%, 250=1.53%, 500=28.45%, 750=1.70%, 1000=2.56% 00:19:30.867 lat (msec) : 2000=11.58%, >=2000=54.00% 00:19:30.867 cpu : usr=0.01%, sys=1.17%, ctx=1212, majf=0, minf=32769 00:19:30.867 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.3% 00:19:30.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.867 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:30.867 issued rwts: total=587,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.867 job5: (groupid=0, jobs=1): err= 0: pid=2877086: Mon Jul 15 14:54:02 2024 00:19:30.867 read: IOPS=20, BW=20.4MiB/s (21.4MB/s)(205MiB/10031msec) 00:19:30.867 slat (usec): min=709, max=2080.8k, avg=48801.87, stdev=243972.65 00:19:30.867 clat (msec): min=25, max=8866, avg=4149.79, stdev=3416.82 00:19:30.867 lat (msec): min=144, max=8876, avg=4198.60, stdev=3426.69 00:19:30.867 clat percentiles (msec): 00:19:30.867 | 1.00th=[ 153], 5.00th=[ 342], 10.00th=[ 489], 20.00th=[ 785], 00:19:30.867 | 30.00th=[ 1351], 40.00th=[ 1754], 50.00th=[ 2265], 60.00th=[ 4245], 00:19:30.867 | 70.00th=[ 8356], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8792], 00:19:30.867 | 
99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8926], 99.95th=[ 8926], 00:19:30.867 | 99.99th=[ 8926] 00:19:30.867 bw ( KiB/s): min=20480, max=65536, per=1.19%, avg=39936.00, stdev=19891.27, samples=4 00:19:30.867 iops : min= 20, max= 64, avg=39.00, stdev=19.43, samples=4 00:19:30.867 lat (msec) : 50=0.49%, 250=2.93%, 500=7.32%, 750=7.32%, 1000=8.29% 00:19:30.867 lat (msec) : 2000=19.02%, >=2000=54.63% 00:19:30.867 cpu : usr=0.00%, sys=0.85%, ctx=707, majf=0, minf=32769 00:19:30.867 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.6%, >=64=69.3% 00:19:30.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.867 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.3% 00:19:30.867 issued rwts: total=205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.867 job5: (groupid=0, jobs=1): err= 0: pid=2877087: Mon Jul 15 14:54:02 2024 00:19:30.867 read: IOPS=41, BW=41.6MiB/s (43.6MB/s)(419MiB/10084msec) 00:19:30.867 slat (usec): min=44, max=2073.6k, avg=23866.02, stdev=158471.70 00:19:30.867 clat (msec): min=82, max=6211, avg=1569.69, stdev=1189.96 00:19:30.867 lat (msec): min=86, max=6213, avg=1593.56, stdev=1210.85 00:19:30.867 clat percentiles (msec): 00:19:30.867 | 1.00th=[ 96], 5.00th=[ 309], 10.00th=[ 592], 20.00th=[ 659], 00:19:30.867 | 30.00th=[ 676], 40.00th=[ 718], 50.00th=[ 1334], 60.00th=[ 1770], 00:19:30.867 | 70.00th=[ 2123], 80.00th=[ 2567], 90.00th=[ 2769], 95.00th=[ 2869], 00:19:30.867 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 6208], 99.95th=[ 6208], 00:19:30.867 | 99.99th=[ 6208] 00:19:30.867 bw ( KiB/s): min= 8192, max=188416, per=2.23%, avg=74752.00, stdev=58564.00, samples=8 00:19:30.867 iops : min= 8, max= 184, avg=73.00, stdev=57.19, samples=8 00:19:30.867 lat (msec) : 100=1.19%, 250=2.63%, 500=4.30%, 750=34.84%, 1000=2.86% 00:19:30.867 lat (msec) : 2000=20.53%, >=2000=33.65% 00:19:30.867 cpu : usr=0.01%, sys=0.86%, ctx=679, majf=0, minf=32769 00:19:30.867 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:19:30.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.867 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:30.867 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.867 job5: (groupid=0, jobs=1): err= 0: pid=2877088: Mon Jul 15 14:54:02 2024 00:19:30.867 read: IOPS=27, BW=27.7MiB/s (29.0MB/s)(280MiB/10123msec) 00:19:30.867 slat (usec): min=90, max=2072.0k, avg=35796.95, stdev=213232.40 00:19:30.867 clat (msec): min=97, max=8838, avg=4343.96, stdev=3596.57 00:19:30.867 lat (msec): min=190, max=8858, avg=4379.76, stdev=3600.29 00:19:30.867 clat percentiles (msec): 00:19:30.867 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 401], 20.00th=[ 743], 00:19:30.867 | 30.00th=[ 1217], 40.00th=[ 1687], 50.00th=[ 1854], 60.00th=[ 8020], 00:19:30.867 | 70.00th=[ 8087], 80.00th=[ 8288], 90.00th=[ 8658], 95.00th=[ 8792], 00:19:30.867 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:19:30.867 | 99.99th=[ 8792] 00:19:30.867 bw ( KiB/s): min= 2048, max=90112, per=1.33%, avg=44470.86, stdev=37028.88, samples=7 00:19:30.867 iops : min= 2, max= 88, avg=43.43, stdev=36.16, samples=7 00:19:30.867 lat (msec) : 100=0.36%, 250=6.43%, 500=6.43%, 750=6.79%, 1000=6.07% 00:19:30.867 lat (msec) : 2000=26.43%, >=2000=47.50% 00:19:30.867 cpu : usr=0.00%, sys=1.02%, ctx=676, majf=0, 
minf=32769 00:19:30.867 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.7%, 32=11.4%, >=64=77.5% 00:19:30.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.867 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:30.867 issued rwts: total=280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.867 job5: (groupid=0, jobs=1): err= 0: pid=2877089: Mon Jul 15 14:54:02 2024 00:19:30.867 read: IOPS=126, BW=127MiB/s (133MB/s)(1273MiB/10042msec) 00:19:30.867 slat (usec): min=38, max=2035.9k, avg=7851.42, stdev=58814.78 00:19:30.867 clat (msec): min=40, max=2918, avg=937.80, stdev=727.01 00:19:30.867 lat (msec): min=81, max=2923, avg=945.65, stdev=729.47 00:19:30.867 clat percentiles (msec): 00:19:30.867 | 1.00th=[ 192], 5.00th=[ 384], 10.00th=[ 418], 20.00th=[ 502], 00:19:30.867 | 30.00th=[ 510], 40.00th=[ 523], 50.00th=[ 592], 60.00th=[ 693], 00:19:30.867 | 70.00th=[ 827], 80.00th=[ 1469], 90.00th=[ 2165], 95.00th=[ 2702], 00:19:30.867 | 99.00th=[ 2903], 99.50th=[ 2903], 99.90th=[ 2903], 99.95th=[ 2903], 00:19:30.867 | 99.99th=[ 2903] 00:19:30.867 bw ( KiB/s): min=16384, max=294912, per=4.67%, avg=156467.20, stdev=94951.76, samples=15 00:19:30.867 iops : min= 16, max= 288, avg=152.80, stdev=92.73, samples=15 00:19:30.867 lat (msec) : 50=0.08%, 100=0.24%, 250=1.34%, 500=17.83%, 750=45.88% 00:19:30.867 lat (msec) : 1000=9.19%, 2000=13.51%, >=2000=11.94% 00:19:30.867 cpu : usr=0.04%, sys=1.65%, ctx=1551, majf=0, minf=32769 00:19:30.867 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:19:30.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.867 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:30.867 issued rwts: total=1273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.867 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:30.867 00:19:30.867 Run status group 0 (all jobs): 00:19:30.867 READ: bw=3274MiB/s (3433MB/s), 1099KiB/s-255MiB/s (1126kB/s-268MB/s), io=38.9GiB (41.8GB), run=10031-12168msec 00:19:30.867 00:19:30.867 Disk stats (read/write): 00:19:30.867 nvme0n1: ios=45236/0, merge=0/0, ticks=7111315/0, in_queue=7111315, util=98.86% 00:19:30.867 nvme1n1: ios=47612/0, merge=0/0, ticks=4942490/0, in_queue=4942490, util=98.86% 00:19:30.867 nvme2n1: ios=38750/0, merge=0/0, ticks=6516810/0, in_queue=6516810, util=98.49% 00:19:30.867 nvme3n1: ios=49344/0, merge=0/0, ticks=6643045/0, in_queue=6643045, util=99.08% 00:19:30.867 nvme4n1: ios=62984/0, merge=0/0, ticks=6935907/0, in_queue=6935907, util=99.24% 00:19:30.867 nvme5n1: ios=72967/0, merge=0/0, ticks=7039784/0, in_queue=7039784, util=99.18% 00:19:30.867 14:54:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:19:30.867 14:54:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:19:30.867 14:54:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:30.867 14:54:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:19:30.867 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:30.867 14:54:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.867 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:19:30.867 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:30.867 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:30.868 14:54:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:31.798 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:31.798 14:54:05 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:31.798 14:54:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:32.729 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:32.729 14:54:06 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:33.659 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:33.659 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:19:33.659 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:33.916 14:54:07 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:33.916 14:54:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:34.845 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:34.845 rmmod nvme_rdma 00:19:34.845 rmmod nvme_fabrics 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 2875563 ']' 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 2875563 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # '[' -z 2875563 ']' 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # kill -0 2875563 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # uname 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 2875563 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2875563' 00:19:34.845 killing process with pid 2875563 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # kill 2875563 00:19:34.845 14:54:08 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # wait 2875563 00:19:35.411 14:54:09 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:35.411 14:54:09 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:35.411 00:19:35.411 real 0m31.733s 00:19:35.411 user 1m52.776s 00:19:35.411 sys 0m13.777s 00:19:35.411 14:54:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.411 14:54:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:35.411 ************************************ 00:19:35.411 END TEST nvmf_srq_overwhelm 00:19:35.411 ************************************ 00:19:35.411 14:54:09 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:19:35.411 14:54:09 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:35.411 14:54:09 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:35.411 14:54:09 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.411 14:54:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:35.411 ************************************ 00:19:35.411 START TEST nvmf_shutdown 00:19:35.411 ************************************ 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:35.411 * Looking for test storage... 
00:19:35.411 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.411 14:54:09 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:35.412 ************************************ 00:19:35.412 START TEST nvmf_shutdown_tc1 00:19:35.412 ************************************ 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:35.412 14:54:09 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.412 14:54:09 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:40.713 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:40.714 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:40.714 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:40.714 Found net devices under 0000:da:00.0: mlx_0_0 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:40.714 Found net devices under 0000:da:00.1: mlx_0_1 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:40.714 14:54:14 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:40.714 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.714 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:40.714 altname enp218s0f0np0 00:19:40.714 altname ens818f0np0 00:19:40.714 inet 192.168.100.8/24 scope global mlx_0_0 00:19:40.714 valid_lft forever preferred_lft forever 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:40.714 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:40.714 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:40.714 altname enp218s0f1np1 00:19:40.714 altname ens818f1np1 00:19:40.714 inet 192.168.100.9/24 scope global mlx_0_1 00:19:40.714 valid_lft forever preferred_lft forever 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:40.714 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:40.715 14:54:14 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:40.715 192.168.100.9' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:40.715 192.168.100.9' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:40.715 192.168.100.9' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2883472 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2883472 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2883472 ']' 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:40.715 14:54:14 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:40.715 [2024-07-15 14:54:14.588534] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:19:40.715 [2024-07-15 14:54:14.588585] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.715 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.974 [2024-07-15 14:54:14.644375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.974 [2024-07-15 14:54:14.724418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.974 [2024-07-15 14:54:14.724453] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.974 [2024-07-15 14:54:14.724460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.974 [2024-07-15 14:54:14.724469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:40.974 [2024-07-15 14:54:14.724474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.974 [2024-07-15 14:54:14.724581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.974 [2024-07-15 14:54:14.724667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.974 [2024-07-15 14:54:14.724775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.974 [2024-07-15 14:54:14.724776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.540 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:41.540 [2024-07-15 14:54:15.451861] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e57e10/0x1e5c300) succeed. 00:19:41.798 [2024-07-15 14:54:15.460969] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e59400/0x1e9d990) succeed. 
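For readability, the xtrace above amounts to two steps: the framework resolves the IPv4 addresses of the mlx5 ports (192.168.100.8 and 192.168.100.9) and then creates the RDMA transport through rpc_cmd, after which the IB devices are registered. A minimal hand-run sketch of those steps, using only commands visible in the trace and assuming scripts/rpc.py stands in for the framework's rpc_cmd wrapper against an already-running nvmf_tgt:

    # Resolve the IPv4 address of the first RDMA-capable port, as get_ip_address() does in the trace.
    NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)
    # Create the RDMA transport with the same options the test passes to rpc_cmd.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    echo "RDMA transport ready; target address: ${NVMF_FIRST_TARGET_IP}"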
00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.798 14:54:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:41.798 Malloc1 00:19:41.798 [2024-07-15 14:54:15.670386] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:41.798 Malloc2 00:19:42.056 Malloc3 00:19:42.056 Malloc4 
00:19:42.056 Malloc5 00:19:42.056 Malloc6 00:19:42.056 Malloc7 00:19:42.056 Malloc8 00:19:42.314 Malloc9 00:19:42.314 Malloc10 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2883828 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2883828 /var/tmp/bdevperf.sock 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2883828 ']' 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.314 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.314 { 00:19:42.314 "params": { 00:19:42.314 "name": "Nvme$subsystem", 00:19:42.314 "trtype": "$TEST_TRANSPORT", 00:19:42.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.314 "adrfam": "ipv4", 00:19:42.314 "trsvcid": "$NVMF_PORT", 00:19:42.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.314 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": 
"$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 [2024-07-15 14:54:16.145031] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:19:42.315 [2024-07-15 14:54:16.145079] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:42.315 { 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme$subsystem", 00:19:42.315 "trtype": "$TEST_TRANSPORT", 00:19:42.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "$NVMF_PORT", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:42.315 "hdgst": ${hdgst:-false}, 00:19:42.315 "ddgst": ${ddgst:-false} 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 } 00:19:42.315 EOF 00:19:42.315 )") 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:42.315 EAL: No free 2048 kB hugepages reported on node 1 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:42.315 14:54:16 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:42.315 "params": { 00:19:42.315 "name": "Nvme1", 00:19:42.315 "trtype": "rdma", 00:19:42.315 "traddr": "192.168.100.8", 00:19:42.315 "adrfam": "ipv4", 00:19:42.315 "trsvcid": "4420", 00:19:42.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:42.315 "hdgst": false, 00:19:42.315 "ddgst": false 00:19:42.315 }, 00:19:42.315 "method": "bdev_nvme_attach_controller" 00:19:42.315 },{ 00:19:42.315 "params": { 00:19:42.316 "name": "Nvme2", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme3", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme4", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme5", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme6", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme7", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme8", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:42.316 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme9", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 },{ 00:19:42.316 "params": { 00:19:42.316 "name": "Nvme10", 00:19:42.316 "trtype": "rdma", 00:19:42.316 "traddr": "192.168.100.8", 00:19:42.316 "adrfam": "ipv4", 00:19:42.316 "trsvcid": "4420", 00:19:42.316 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:42.316 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:42.316 "hdgst": false, 00:19:42.316 "ddgst": false 00:19:42.316 }, 00:19:42.316 "method": "bdev_nvme_attach_controller" 00:19:42.316 }' 00:19:42.316 [2024-07-15 14:54:16.203049] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.574 [2024-07-15 14:54:16.277114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2883828 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:43.505 14:54:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:44.436 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2883828 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2883472 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": 
"$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 
00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 [2024-07-15 14:54:18.184087] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:19:44.437 [2024-07-15 14:54:18.184135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884187 ] 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.437 { 00:19:44.437 "params": { 00:19:44.437 "name": "Nvme$subsystem", 00:19:44.437 "trtype": "$TEST_TRANSPORT", 00:19:44.437 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.437 "adrfam": "ipv4", 00:19:44.437 "trsvcid": "$NVMF_PORT", 00:19:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.437 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.437 "hdgst": ${hdgst:-false}, 00:19:44.437 "ddgst": ${ddgst:-false} 00:19:44.437 }, 00:19:44.437 "method": "bdev_nvme_attach_controller" 00:19:44.437 } 00:19:44.437 EOF 00:19:44.437 )") 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.437 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:44.438 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:44.438 { 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme$subsystem", 00:19:44.438 "trtype": "$TEST_TRANSPORT", 00:19:44.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "$NVMF_PORT", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.438 "hdgst": ${hdgst:-false}, 00:19:44.438 "ddgst": ${ddgst:-false} 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 } 00:19:44.438 EOF 00:19:44.438 )") 00:19:44.438 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:44.438 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
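The long run of config+=(...) heredocs above is gen_nvmf_target_json building one bdev_nvme_attach_controller fragment per subsystem and joining them with IFS=, before pretty-printing with jq; the rendered result is printed just below. A stripped-down sketch of that same array-join pattern (two subsystems only, fragments wrapped in a bare JSON array rather than the full target config the helper actually emits):

    config=()
    for subsystem in 1 2; do
      # One attach-controller fragment per subsystem, mirroring the heredoc in the trace.
      config+=("{\"params\": {\"name\": \"Nvme$subsystem\", \"trtype\": \"rdma\", \"traddr\": \"192.168.100.8\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$subsystem\", \"hdgst\": false, \"ddgst\": false}, \"method\": \"bdev_nvme_attach_controller\"}")
    done
    # Join the fragments with commas (IFS scoped to the subshell) and pretty-print.
    (IFS=,; printf '[%s]\n' "${config[*]}") | jq .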
00:19:44.438 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.438 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:44.438 14:54:18 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme1", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme2", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme3", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme4", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme5", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme6", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme7", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme8", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme9", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 },{ 00:19:44.438 "params": { 00:19:44.438 "name": "Nvme10", 00:19:44.438 "trtype": "rdma", 00:19:44.438 "traddr": "192.168.100.8", 00:19:44.438 "adrfam": "ipv4", 00:19:44.438 "trsvcid": "4420", 00:19:44.438 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:44.438 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:44.438 "hdgst": false, 00:19:44.438 "ddgst": false 00:19:44.438 }, 00:19:44.438 "method": "bdev_nvme_attach_controller" 00:19:44.438 }' 00:19:44.438 [2024-07-15 14:54:18.241871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.438 [2024-07-15 14:54:18.316801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.371 Running I/O for 1 seconds... 00:19:46.756 00:19:46.757 Latency(us) 00:19:46.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.757 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme1n1 : 1.17 368.88 23.05 0.00 0.00 171070.96 6990.51 235679.94 00:19:46.757 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme2n1 : 1.17 382.98 23.94 0.00 0.00 162583.98 5679.79 166773.52 00:19:46.757 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme3n1 : 1.17 384.31 24.02 0.00 0.00 159726.65 7458.62 159783.01 00:19:46.757 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme4n1 : 1.17 383.93 24.00 0.00 0.00 157644.94 4618.73 152792.50 00:19:46.757 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme5n1 : 1.18 380.92 23.81 0.00 0.00 156993.76 10048.85 141807.42 00:19:46.757 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme6n1 : 1.18 380.51 23.78 0.00 0.00 154756.42 10548.18 132819.63 00:19:46.757 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme7n1 : 1.18 380.11 23.76 0.00 0.00 152649.07 10922.67 124830.48 00:19:46.757 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme8n1 : 1.18 379.71 23.73 0.00 0.00 150598.29 11297.16 116342.00 00:19:46.757 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme9n1 : 1.18 378.55 23.66 0.00 0.00 148928.75 2683.86 110350.14 00:19:46.757 Job: Nvme10n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:19:46.757 Verification LBA range: start 0x0 length 0x400 00:19:46.757 Nvme10n1 : 1.17 328.22 20.51 0.00 0.00 169550.51 8113.98 177758.60 00:19:46.757 =================================================================================================================== 00:19:46.757 Total : 3748.12 234.26 0.00 0.00 158244.57 2683.86 235679.94 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.757 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:46.757 rmmod nvme_rdma 00:19:46.757 rmmod nvme_fabrics 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2883472 ']' 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2883472 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2883472 ']' 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2883472 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2883472 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2883472' 00:19:47.014 killing process with pid 2883472 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@967 -- # kill 2883472 00:19:47.014 14:54:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2883472 00:19:47.271 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.271 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:47.271 00:19:47.271 real 0m11.903s 00:19:47.271 user 0m30.221s 00:19:47.271 sys 0m4.885s 00:19:47.271 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:47.271 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:47.271 ************************************ 00:19:47.271 END TEST nvmf_shutdown_tc1 00:19:47.271 ************************************ 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:47.529 ************************************ 00:19:47.529 START TEST nvmf_shutdown_tc2 00:19:47.529 ************************************ 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:47.529 
14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.529 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:47.530 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:47.530 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:47.530 Found net devices under 0000:da:00.0: mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:47.530 14:54:21 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:47.530 Found net devices under 0000:da:00.1: mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # 
[[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:47.530 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:47.530 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:47.530 altname enp218s0f0np0 00:19:47.530 altname ens818f0np0 00:19:47.530 inet 192.168.100.8/24 scope global mlx_0_0 00:19:47.530 valid_lft forever preferred_lft forever 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:47.530 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:47.530 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 
00:19:47.530 altname enp218s0f1np1 00:19:47.530 altname ens818f1np1 00:19:47.530 inet 192.168.100.9/24 scope global mlx_0_1 00:19:47.530 valid_lft forever preferred_lft forever 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut 
-d/ -f1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:47.530 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:47.788 192.168.100.9' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:47.788 192.168.100.9' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:47.788 192.168.100.9' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2884765 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2884765 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2884765 ']' 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.788 14:54:21 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.788 14:54:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:47.788 [2024-07-15 14:54:21.539659] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:19:47.788 [2024-07-15 14:54:21.539707] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.788 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.788 [2024-07-15 14:54:21.597635] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:47.788 [2024-07-15 14:54:21.678712] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.788 [2024-07-15 14:54:21.678750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.788 [2024-07-15 14:54:21.678758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.788 [2024-07-15 14:54:21.678763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.788 [2024-07-15 14:54:21.678768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.788 [2024-07-15 14:54:21.678867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.788 [2024-07-15 14:54:21.678956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.788 [2024-07-15 14:54:21.679063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.788 [2024-07-15 14:54:21.679065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.720 [2024-07-15 14:54:22.401248] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x114be10/0x1150300) succeed. 
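Earlier in this block the harness resolved the RDMA-capable netdevs to their IPv4 addresses: get_rdma_if_list picked mlx_0_0 and mlx_0_1, and get_ip_address reduced the "ip -o -4 addr show" output to 192.168.100.8 and 192.168.100.9. A self-contained sketch of that exact pipeline (helper names follow the trace, not the full nvmf/common.sh source):

#!/usr/bin/env bash
# Resolve an interface's first IPv4 address, exactly as traced:
# "ip -o -4 addr show" prints one record per address and field 4 is the CIDR form.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Walk the RDMA netdevs found in this run and collect their addresses.
rdma_ips=()
for nic_name in mlx_0_0 mlx_0_1; do
    ip_addr=$(get_ip_address "$nic_name")
    [[ -z $ip_addr ]] && continue   # interface exists but has no IPv4 yet
    rdma_ips+=("$ip_addr")
done

NVMF_FIRST_TARGET_IP=${rdma_ips[0]:-}    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=${rdma_ips[1]:-}   # 192.168.100.9 in this run
echo "$NVMF_FIRST_TARGET_IP" "$NVMF_SECOND_TARGET_IP"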
00:19:48.720 [2024-07-15 14:54:22.410280] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x114d400/0x1191990) succeed. 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.720 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.721 14:54:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.721 Malloc1 00:19:48.721 [2024-07-15 14:54:22.618361] rdma.c:3036:nvmf_rdma_listen: 
*NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:48.721 Malloc2 00:19:48.978 Malloc3 00:19:48.978 Malloc4 00:19:48.978 Malloc5 00:19:48.978 Malloc6 00:19:48.978 Malloc7 00:19:49.235 Malloc8 00:19:49.235 Malloc9 00:19:49.235 Malloc10 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2885101 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2885101 /var/tmp/bdevperf.sock 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2885101 ']' 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
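At this point the target side is fully populated: the trace shows the RDMA transport being created, Malloc1 through Malloc10 appearing, and the listener coming up on 192.168.100.8 port 4420. The RPC batch itself is written to rpcs.txt and never echoed, so the sketch below is only one plausible way to reach the same state with scripts/rpc.py; the transport line matches the traced rpc_cmd, while the Malloc sizing and serial numbers are illustrative assumptions.

#!/usr/bin/env bash
# Plausible reconstruction of the target setup; only the transport parameters
# are taken verbatim from the trace.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

for i in {1..10}; do
    # Backing ramdisk: 128 MiB with 512-byte blocks (sizes assumed, not in the log).
    $RPC bdev_malloc_create 128 512 -b "Malloc$i"
    # One subsystem per bdev, any host allowed; serial number is illustrative.
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK00000000000$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Expose the subsystem on the RDMA listener the attach-controller config targets.
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t rdma -a 192.168.100.8 -s 4420
done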
00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.235 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.235 { 00:19:49.235 "params": { 00:19:49.235 "name": "Nvme$subsystem", 00:19:49.235 "trtype": "$TEST_TRANSPORT", 00:19:49.235 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.235 "adrfam": "ipv4", 00:19:49.235 "trsvcid": "$NVMF_PORT", 00:19:49.235 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.235 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.235 "hdgst": ${hdgst:-false}, 00:19:49.235 "ddgst": ${ddgst:-false} 00:19:49.235 }, 00:19:49.235 "method": "bdev_nvme_attach_controller" 00:19:49.235 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": 
"$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 [2024-07-15 14:54:23.089808] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:19:49.236 [2024-07-15 14:54:23.089858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885101 ] 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.236 { 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme$subsystem", 00:19:49.236 "trtype": "$TEST_TRANSPORT", 00:19:49.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "$NVMF_PORT", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.236 "hdgst": ${hdgst:-false}, 00:19:49.236 "ddgst": ${ddgst:-false} 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 } 00:19:49.236 EOF 00:19:49.236 )") 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
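The assembled JSON is handed to bdevperf over /dev/fd/63, and the harness then waits on /var/tmp/bdevperf.sock before polling per-bdev I/O counters (the waitforio loop at the end of this trace does exactly that for Nvme1n1). A sketch of the flow, assuming the harness helpers gen_nvmf_target_json and waitforlisten are already sourced; feeding the config through process substitution is consistent with the /dev/fd/63 seen above, and the local rpc_cmd wrapper is hypothetical.

#!/usr/bin/env bash
# Launch bdevperf with the generated attach-controller config (flags copied from
# the traced command line), wait for its RPC socket, then read I/O stats from it.
BDEVPERF_SOCK=/var/tmp/bdevperf.sock

./build/examples/bdevperf -r "$BDEVPERF_SOCK" \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# Blocks until the process is alive and the socket accepts RPCs.
waitforlisten "$perfpid" "$BDEVPERF_SOCK"

# Hypothetical local wrapper; the harness's rpc_cmd does roughly the same thing.
rpc_cmd() { ./scripts/rpc.py -s "$BDEVPERF_SOCK" "$@"; }

# waitforio polls this value until it starts increasing, proving I/O is flowing.
reads=$(rpc_cmd bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
echo "Nvme1n1 num_read_ops: $reads"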
00:19:49.236 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:49.236 14:54:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme1", 00:19:49.236 "trtype": "rdma", 00:19:49.236 "traddr": "192.168.100.8", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "4420", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.236 "hdgst": false, 00:19:49.236 "ddgst": false 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 },{ 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme2", 00:19:49.236 "trtype": "rdma", 00:19:49.236 "traddr": "192.168.100.8", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "4420", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:49.236 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:49.236 "hdgst": false, 00:19:49.236 "ddgst": false 00:19:49.236 }, 00:19:49.236 "method": "bdev_nvme_attach_controller" 00:19:49.236 },{ 00:19:49.236 "params": { 00:19:49.236 "name": "Nvme3", 00:19:49.236 "trtype": "rdma", 00:19:49.236 "traddr": "192.168.100.8", 00:19:49.236 "adrfam": "ipv4", 00:19:49.236 "trsvcid": "4420", 00:19:49.236 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 },{ 00:19:49.237 "params": { 00:19:49.237 "name": "Nvme4", 00:19:49.237 "trtype": "rdma", 00:19:49.237 "traddr": "192.168.100.8", 00:19:49.237 "adrfam": "ipv4", 00:19:49.237 "trsvcid": "4420", 00:19:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 },{ 00:19:49.237 "params": { 00:19:49.237 "name": "Nvme5", 00:19:49.237 "trtype": "rdma", 00:19:49.237 "traddr": "192.168.100.8", 00:19:49.237 "adrfam": "ipv4", 00:19:49.237 "trsvcid": "4420", 00:19:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 },{ 00:19:49.237 "params": { 00:19:49.237 "name": "Nvme6", 00:19:49.237 "trtype": "rdma", 00:19:49.237 "traddr": "192.168.100.8", 00:19:49.237 "adrfam": "ipv4", 00:19:49.237 "trsvcid": "4420", 00:19:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 },{ 00:19:49.237 "params": { 00:19:49.237 "name": "Nvme7", 00:19:49.237 "trtype": "rdma", 00:19:49.237 "traddr": "192.168.100.8", 00:19:49.237 "adrfam": "ipv4", 00:19:49.237 "trsvcid": "4420", 00:19:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 },{ 00:19:49.237 "params": { 00:19:49.237 "name": "Nvme8", 00:19:49.237 "trtype": "rdma", 00:19:49.237 "traddr": "192.168.100.8", 00:19:49.237 "adrfam": "ipv4", 00:19:49.237 "trsvcid": "4420", 00:19:49.237 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 },{ 00:19:49.237 "params": { 00:19:49.237 "name": "Nvme9", 00:19:49.237 "trtype": "rdma", 00:19:49.237 "traddr": "192.168.100.8", 00:19:49.237 "adrfam": "ipv4", 00:19:49.237 "trsvcid": "4420", 00:19:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 },{ 00:19:49.237 "params": { 00:19:49.237 "name": "Nvme10", 00:19:49.237 "trtype": "rdma", 00:19:49.237 "traddr": "192.168.100.8", 00:19:49.237 "adrfam": "ipv4", 00:19:49.237 "trsvcid": "4420", 00:19:49.237 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:49.237 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:49.237 "hdgst": false, 00:19:49.237 "ddgst": false 00:19:49.237 }, 00:19:49.237 "method": "bdev_nvme_attach_controller" 00:19:49.237 }' 00:19:49.237 [2024-07-15 14:54:23.148660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.498 [2024-07-15 14:54:23.223307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.429 Running I/O for 10 seconds... 00:19:50.429 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.430 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.697 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.697 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@60 -- # read_io_count=4
00:19:50.697 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 4 -ge 100 ']'
00:19:50.697 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- ))
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=148
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 148 -ge 100 ']'
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2885101
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2885101 ']'
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2885101
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885101
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885101'
00:19:50.953 killing process with pid 2885101
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2885101
00:19:50.953 14:54:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2885101
00:19:51.210 Received shutdown signal, test time was about 0.816492 seconds
00:19:51.210
00:19:51.210 Latency(us)
00:19:51.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:51.210 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.210 Nvme1n1 : 0.80 339.77 21.24 0.00 0.00 183898.33 6366.35 205720.62
00:19:51.210 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.210 Nvme2n1 : 0.80 338.14 21.13 0.00 0.00 180869.49 7458.62 190740.97
00:19:51.210 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.210 Nvme3n1 : 0.81 357.52 22.34 0.00 0.00 168079.36 7677.07 183750.46
00:19:51.210 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.210 Nvme4n1 : 0.81 396.66 24.79 0.00 0.00 148446.99 5180.46 129823.70
00:19:51.210 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.210 Nvme5n1 : 0.81 395.94 24.75 0.00 0.00 146056.92 8550.89 122833.19
00:19:51.210 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.210 Nvme6n1 : 0.81 395.21 24.70 0.00 0.00 143247.26 9237.46 113845.39
00:19:51.210 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.210 Nvme7n1 : 0.81 394.61 24.66 0.00 0.00 139910.00 9736.78 108852.18
00:19:51.210 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.210 Verification LBA range: start 0x0 length 0x400
00:19:51.211 Nvme8n1 : 0.81 393.93 24.62 0.00 0.00 137362.87 10236.10 104358.28
00:19:51.211 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.211 Verification LBA range: start 0x0 length 0x400
00:19:51.211 Nvme9n1 : 0.81 393.11 24.57 0.00 0.00 135159.61 11109.91 92873.87
00:19:51.211 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:51.211 Verification LBA range: start 0x0 length 0x400
00:19:51.211 Nvme10n1 : 0.82 313.84 19.61 0.00 0.00 165279.70 8363.64 209715.20
00:19:51.211 ===================================================================================================================
00:19:51.211 Total : 3718.74 232.42 0.00 0.00 153598.53 5180.46 209715.20
00:19:51.467 14:54:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:19:52.395 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2884765
00:19:52.395 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:19:52.395 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:52.395 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e
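The read_io_count values in the trace above (4 on the first poll, 148 a quarter of a second later) are produced by the shutdown test's waitforio helper, which polls bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads or ten attempts have been used up. A sketch reconstructed from the target/shutdown.sh@50-69 entries, with rpc.py standing in for the framework's rpc_cmd wrapper:

# Poll bdevperf until the named bdev reports >= 100 completed reads.
waitforio() {
  local sock=$1 bdev=$2
  local ret=1 i read_io_count
  [ -z "$sock" ] && return 1
  [ -z "$bdev" ] && return 1
  for ((i = 10; i != 0; i--)); do
    read_io_count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
      | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}
# e.g. waitforio /var/tmp/bdevperf.sock Nvme1n1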
00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:52.396 rmmod nvme_rdma 00:19:52.396 rmmod nvme_fabrics 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2884765 ']' 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2884765 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2884765 ']' 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2884765 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2884765 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2884765' 00:19:52.396 killing process with pid 2884765 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2884765 00:19:52.396 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2884765 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:52.961 00:19:52.961 real 0m5.483s 00:19:52.961 user 0m22.136s 00:19:52.961 sys 0m1.034s 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.961 ************************************ 00:19:52.961 END TEST nvmf_shutdown_tc2 00:19:52.961 ************************************ 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:52.961 ************************************ 00:19:52.961 START TEST nvmf_shutdown_tc3 00:19:52.961 ************************************ 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 
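nvmf_shutdown_tc3 begins by running nvmftestinit again: the trace that follows probes the two mlx5 ports (mlx_0_0 and mlx_0_1) and reads back 192.168.100.8 and 192.168.100.9 from them. The address lookup seen further down (get_ip_address in nvmf/common.sh@112-113) reduces to a one-line pipeline; a sketch:

get_ip_address() {
  local interface=$1
  # "ip -o -4" prints one line per address; field 4 is ADDR/PREFIX, so strip the prefix length.
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# e.g. get_ip_address mlx_0_0   -> 192.168.100.8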
00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.961 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:52.962 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:52.962 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:52.962 Found net devices under 0000:da:00.0: mlx_0_0 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:52.962 Found net devices under 0000:da:00.1: mlx_0_1 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 
00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:52.962 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:53.220 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:53.221 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:53.221 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:53.221 altname enp218s0f0np0 00:19:53.221 altname ens818f0np0 00:19:53.221 inet 192.168.100.8/24 scope global mlx_0_0 00:19:53.221 valid_lft forever preferred_lft forever 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:53.221 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:53.221 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:53.221 altname enp218s0f1np1 00:19:53.221 altname ens818f1np1 00:19:53.221 inet 192.168.100.9/24 scope global mlx_0_1 00:19:53.221 valid_lft forever preferred_lft forever 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.221 14:54:26 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.221 14:54:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:53.221 192.168.100.9' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:53.221 192.168.100.9' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:53.221 192.168.100.9' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2885845 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2885845 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2885845 ']' 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.221 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:53.221 [2024-07-15 14:54:27.096388] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:19:53.221 [2024-07-15 14:54:27.096436] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.221 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.479 [2024-07-15 14:54:27.150717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.479 [2024-07-15 14:54:27.232860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.479 [2024-07-15 14:54:27.232899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.479 [2024-07-15 14:54:27.232907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.479 [2024-07-15 14:54:27.232913] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:53.479 [2024-07-15 14:54:27.232918] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.479 [2024-07-15 14:54:27.232958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.479 [2024-07-15 14:54:27.233046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.479 [2024-07-15 14:54:27.233158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.479 [2024-07-15 14:54:27.233160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.045 14:54:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.303 [2024-07-15 14:54:27.967563] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x79de10/0x7a2300) succeed. 00:19:54.303 [2024-07-15 14:54:27.976652] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x79f400/0x7e3990) succeed. 
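With the RDMA transport created and both IB devices up, starttarget writes one subsystem block per index into rpcs.txt and replays the whole file through rpc_cmd; only the resulting Malloc1-Malloc10 bdevs and the RDMA listener notice appear below. The individual RPCs are not echoed in this log, but for a single subsystem the batch corresponds roughly to the following rpc.py calls (the malloc size, block size and serial number are assumed values, not taken from the test):

# Transport creation is issued directly, as traced above:
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Per-subsystem block, repeated for cnode1..cnode10 via rpcs.txt:
scripts/rpc.py bdev_malloc_create -b Malloc1 128 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420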
00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.303 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.303 Malloc1 00:19:54.303 [2024-07-15 14:54:28.184170] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:54.303 Malloc2 00:19:54.561 Malloc3 00:19:54.561 Malloc4 
00:19:54.561 Malloc5 00:19:54.561 Malloc6 00:19:54.561 Malloc7 00:19:54.561 Malloc8 00:19:54.820 Malloc9 00:19:54.820 Malloc10 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2886174 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2886174 /var/tmp/bdevperf.sock 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2886174 ']' 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.820 { 00:19:54.820 "params": { 00:19:54.820 "name": "Nvme$subsystem", 00:19:54.820 "trtype": "$TEST_TRANSPORT", 00:19:54.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.820 "adrfam": "ipv4", 00:19:54.820 "trsvcid": "$NVMF_PORT", 00:19:54.820 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.820 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.820 "hdgst": ${hdgst:-false}, 00:19:54.820 "ddgst": ${ddgst:-false} 00:19:54.820 }, 00:19:54.820 "method": "bdev_nvme_attach_controller" 00:19:54.820 } 00:19:54.820 EOF 00:19:54.820 )") 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.820 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.820 { 00:19:54.820 "params": { 00:19:54.820 "name": "Nvme$subsystem", 00:19:54.820 "trtype": "$TEST_TRANSPORT", 00:19:54.820 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.820 "adrfam": "ipv4", 00:19:54.820 "trsvcid": "$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.821 "trtype": "$TEST_TRANSPORT", 00:19:54.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.821 "adrfam": "ipv4", 00:19:54.821 "trsvcid": "$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.821 "trtype": "$TEST_TRANSPORT", 00:19:54.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.821 "adrfam": "ipv4", 00:19:54.821 "trsvcid": 
"$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.821 "trtype": "$TEST_TRANSPORT", 00:19:54.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.821 "adrfam": "ipv4", 00:19:54.821 "trsvcid": "$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.821 "trtype": "$TEST_TRANSPORT", 00:19:54.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.821 "adrfam": "ipv4", 00:19:54.821 "trsvcid": "$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.821 "trtype": "$TEST_TRANSPORT", 00:19:54.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.821 "adrfam": "ipv4", 00:19:54.821 "trsvcid": "$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 [2024-07-15 14:54:28.655429] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:19:54.821 [2024-07-15 14:54:28.655480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886174 ] 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.821 "trtype": "$TEST_TRANSPORT", 00:19:54.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.821 "adrfam": "ipv4", 00:19:54.821 "trsvcid": "$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.821 "trtype": "$TEST_TRANSPORT", 00:19:54.821 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.821 "adrfam": "ipv4", 00:19:54.821 "trsvcid": "$NVMF_PORT", 00:19:54.821 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.821 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.821 "hdgst": ${hdgst:-false}, 00:19:54.821 "ddgst": ${ddgst:-false} 00:19:54.821 }, 00:19:54.821 "method": "bdev_nvme_attach_controller" 00:19:54.821 } 00:19:54.821 EOF 00:19:54.821 )") 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:54.821 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:54.821 { 00:19:54.821 "params": { 00:19:54.821 "name": "Nvme$subsystem", 00:19:54.822 "trtype": "$TEST_TRANSPORT", 00:19:54.822 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "$NVMF_PORT", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.822 "hdgst": ${hdgst:-false}, 00:19:54.822 "ddgst": ${ddgst:-false} 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 } 00:19:54.822 EOF 00:19:54.822 )") 00:19:54.822 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:54.822 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:19:54.822 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.822 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:54.822 14:54:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme1", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme2", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme3", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme4", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme5", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme6", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme7", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme8", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme9", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 },{ 00:19:54.822 "params": { 00:19:54.822 "name": "Nvme10", 00:19:54.822 "trtype": "rdma", 00:19:54.822 "traddr": "192.168.100.8", 00:19:54.822 "adrfam": "ipv4", 00:19:54.822 "trsvcid": "4420", 00:19:54.822 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:54.822 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:54.822 "hdgst": false, 00:19:54.822 "ddgst": false 00:19:54.822 }, 00:19:54.822 "method": "bdev_nvme_attach_controller" 00:19:54.822 }' 00:19:54.822 [2024-07-15 14:54:28.713106] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.082 [2024-07-15 14:54:28.788418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.790 Running I/O for 10 seconds... 00:19:55.790 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.790 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:19:55.790 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:55.790 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.790 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:56.079 14:54:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:56.336 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:56.336 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:56.336 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:56.336 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:56.336 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.336 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=146 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 146 -ge 100 ']' 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2885845 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2885845 ']' 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2885845 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2885845 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2885845' 00:19:56.594 killing process with pid 2885845 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2885845 00:19:56.594 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2885845 00:19:57.159 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:57.159 14:54:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:57.730 [2024-07-15 14:54:31.458215] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x200019256900 was disconnected and freed. reset controller. 00:19:57.730 [2024-07-15 14:54:31.460248] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:19:57.730 [2024-07-15 14:54:31.462112] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:19:57.730 [2024-07-15 14:54:31.464278] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:19:57.730 [2024-07-15 14:54:31.466566] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:19:57.730 [2024-07-15 14:54:31.469118] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:19:57.730 [2024-07-15 14:54:31.469170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a44f900 len:0x10000 key:0x183200 00:19:57.730 [2024-07-15 14:54:31.469199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a43f880 len:0x10000 key:0x183200 00:19:57.730 [2024-07-15 14:54:31.469265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a42f800 len:0x10000 key:0x183200 00:19:57.730 [2024-07-15 14:54:31.469315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183200 00:19:57.730 [2024-07-15 14:54:31.469364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183200 00:19:57.730 [2024-07-15 14:54:31.469414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7f0000 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a79fd80 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.469984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.469997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 
p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.730 [2024-07-15 14:54:31.470222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x183600 00:19:57.730 [2024-07-15 14:54:31.470232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183600 00:19:57.731 [2024-07-15 14:54:31.470257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183600 00:19:57.731 [2024-07-15 14:54:31.470280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183600 00:19:57.731 [2024-07-15 14:54:31.470304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 
[2024-07-15 14:54:31.470318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183600 00:19:57.731 [2024-07-15 14:54:31.470328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183600 00:19:57.731 [2024-07-15 14:54:31.470352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470974] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.470984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.470997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183b00 00:19:57.731 [2024-07-15 14:54:31.471007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.471021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0x183200 00:19:57.731 [2024-07-15 14:54:31.471031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.473171] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:19:57.731 [2024-07-15 14:54:31.473219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa8fb00 len:0x10000 key:0x183800 00:19:57.731 [2024-07-15 14:54:31.473242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.473281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa7fa80 len:0x10000 key:0x183800 00:19:57.731 [2024-07-15 14:54:31.473304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.473330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183800 00:19:57.731 [2024-07-15 14:54:31.473354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.473368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183800 00:19:57.731 [2024-07-15 14:54:31.473378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.731 [2024-07-15 14:54:31.473391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa4f900 len:0x10000 key:0x183800 00:19:57.731 [2024-07-15 14:54:31.473402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183800 00:19:57.732 [2024-07-15 14:54:31.473425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183800 00:19:57.732 [2024-07-15 14:54:31.473448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183800 00:19:57.732 [2024-07-15 14:54:31.473471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183800 00:19:57.732 [2024-07-15 14:54:31.473495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183b00 00:19:57.732 [2024-07-15 14:54:31.473518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x183b00 00:19:57.732 [2024-07-15 14:54:31.473590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183b00 00:19:57.732 [2024-07-15 14:54:31.473616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183b00 00:19:57.732 [2024-07-15 14:54:31.473642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 
dnr:0 00:19:57.732 [2024-07-15 14:54:31.473703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 
14:54:31.473915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.473984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.473995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474335] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183700 00:19:57.732 [2024-07-15 14:54:31.474346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.732 [2024-07-15 14:54:31.474359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183700 00:19:57.733 [2024-07-15 14:54:31.474369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 
nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x184100 00:19:57.733 [2024-07-15 14:54:31.474713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183800 00:19:57.733 [2024-07-15 14:54:31.474737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd59000 len:0x10000 key:0x184400 00:19:57.733 [2024-07-15 14:54:31.474760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32896 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x184400 00:19:57.733 [2024-07-15 14:54:31.474784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x184400 00:19:57.733 [2024-07-15 14:54:31.474807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x184400 00:19:57.733 [2024-07-15 14:54:31.474830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.474843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcd5000 len:0x10000 key:0x184400 00:19:57.733 [2024-07-15 14:54:31.474854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477449] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:19:57.733 [2024-07-15 14:54:31.477490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 
[2024-07-15 14:54:31.477635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477857] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.733 [2024-07-15 14:54:31.477931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x184200 00:19:57.733 [2024-07-15 14:54:31.477941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.477954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.477964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.477978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.477988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478071] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x184200 00:19:57.734 [2024-07-15 14:54:31.478248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478284] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 
nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183500 00:19:57.734 [2024-07-15 14:54:31.478689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.734 [2024-07-15 14:54:31.478702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 
len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.478986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183500 00:19:57.735 [2024-07-15 14:54:31.478996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.479009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.479021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.479033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x184100 00:19:57.735 [2024-07-15 14:54:31.479044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.487805] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 
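Note on the completion notices above (editorial aside, not part of the test output): the "(00/08)" printed by spdk_nvme_print_completion is the NVMe status as a (status code type / status code) pair in hex. Status code type 0x0 is the generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is why every aborted I/O in the dump above carries the same pair while the submission queues are being torn down for the controller reset. A minimal decoding sketch, assuming only the values that actually appear in this log (the function name is illustrative, names follow the NVMe base specification):

GENERIC_STATUS = {
    0x00: "SUCCESSFUL COMPLETION",
    0x08: "COMMAND ABORTED DUE TO SQ DELETION",
}

def decode_status(sct: int, sc: int) -> str:
    """Map an (sct, sc) pair to a readable name, falling back to raw hex."""
    if sct == 0x0:  # status code type 0x0 = generic command status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

print(decode_status(0x00, 0x08))  # the pair printed throughout this section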
00:19:57.735 [2024-07-15 14:54:31.487849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.487864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.487884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.487895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.487909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.487920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.487933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.487944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.487958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.487969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.487982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.487992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.488021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.488045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.488070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 
14:54:31.488083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.488094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.488119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.488143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x184000 00:19:57.735 [2024-07-15 14:54:31.488167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.735 [2024-07-15 14:54:31.488425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183c00 00:19:57.735 [2024-07-15 14:54:31.488435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488520] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 
nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183c00 00:19:57.736 [2024-07-15 14:54:31.488929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.488954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.488978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.488991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 
len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.736 [2024-07-15 14:54:31.489370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184500 00:19:57.736 [2024-07-15 14:54:31.489381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.489396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x184000 
00:19:57.737 [2024-07-15 14:54:31.489406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:53dc8000 sqhd:52b0 p:0 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.492057] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:19:57.737 [2024-07-15 14:54:31.492140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.492154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.492166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.492178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.492189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.492200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.492211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.492222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.493984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.737 [2024-07-15 14:54:31.494003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:57.737 [2024-07-15 14:54:31.494013] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.737 [2024-07-15 14:54:31.494031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.494042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.494054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.494070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.494081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.494092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.494103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.494113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.495599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.737 [2024-07-15 14:54:31.495616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:57.737 [2024-07-15 14:54:31.495625] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.737 [2024-07-15 14:54:31.495642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.495657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.495668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.495687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.495698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.495708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.495719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.495729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.497633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.737 [2024-07-15 14:54:31.497649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:57.737 [2024-07-15 14:54:31.497658] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.737 [2024-07-15 14:54:31.497675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.497687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.497698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.497708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.497726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.497736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.497748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.497758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.499747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.737 [2024-07-15 14:54:31.499763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:57.737 [2024-07-15 14:54:31.499773] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.737 [2024-07-15 14:54:31.499789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.499800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.499811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.499821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.499832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.499846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.499857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.499868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.501593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.737 [2024-07-15 14:54:31.501609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:57.737 [2024-07-15 14:54:31.501618] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.737 [2024-07-15 14:54:31.501635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.501646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.501657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.501667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.501680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.501691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.501702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.501712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.503386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.737 [2024-07-15 14:54:31.503401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:57.737 [2024-07-15 14:54:31.503410] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.737 [2024-07-15 14:54:31.503427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.503437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.503448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.503459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.503470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.503481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.503491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.503501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.505294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.737 [2024-07-15 14:54:31.505314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:57.737 [2024-07-15 14:54:31.505323] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.737 [2024-07-15 14:54:31.505343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.505353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.505365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.505375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.505388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.505397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.737 [2024-07-15 14:54:31.505408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.737 [2024-07-15 14:54:31.505418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.507833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.738 [2024-07-15 14:54:31.507848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.738 [2024-07-15 14:54:31.507857] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.738 [2024-07-15 14:54:31.507874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.507884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.507895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.507905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.507916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.507927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.507938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.507949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.509508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.738 [2024-07-15 14:54:31.509523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:57.738 [2024-07-15 14:54:31.509532] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.738 [2024-07-15 14:54:31.509556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.509567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.509582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.509592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.509604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.509614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.509625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.738 [2024-07-15 14:54:31.509636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:30905 cdw0:0 sqhd:bd00 p:1 m:0 dnr:0 00:19:57.738 [2024-07-15 14:54:31.530724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.738 [2024-07-15 14:54:31.530743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:57.738 [2024-07-15 14:54:31.530750] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.738 [2024-07-15 14:54:31.540805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.738 [2024-07-15 14:54:31.540835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:57.738 [2024-07-15 14:54:31.540845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:57.738 [2024-07-15 14:54:31.540886] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.738 [2024-07-15 14:54:31.540900] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.738 [2024-07-15 14:54:31.540913] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.738 [2024-07-15 14:54:31.540924] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.738 [2024-07-15 14:54:31.540936] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.738 [2024-07-15 14:54:31.540952] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.738 [2024-07-15 14:54:31.540962] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
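The failure pattern above is the SPDK host-side recovery path: spdk_nvme_qpair_process_completions() reports a CQ transport error (-6, No such device or address), nvme_ctrlr_fail() marks the controller failed, and bdev_nvme schedules a disconnect and reset for each affected subsystem (cnode1 through cnode10). A minimal sketch of that detect-and-reset pattern is shown below, using only public SPDK NVMe driver calls; it is not the bdev_nvme implementation itself, and the helper name poll_admin_and_recover is invented for this illustration.

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical helper: poll the admin queue and recover from a transport error. */
static void
poll_admin_and_recover(struct spdk_nvme_ctrlr *ctrlr)
{
        /* A negative return means the completion queue hit a transport-level
         * failure, such as the "-6 (No such device or address)" logged above. */
        int32_t rc = spdk_nvme_ctrlr_process_admin_completions(ctrlr);

        if (rc < 0) {
                /* The controller is now in a failed state; a reset disconnects and
                 * reconnects its fabric queue pairs, matching the "resetting
                 * controller" notices that follow in the log. */
                if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                        fprintf(stderr, "controller reset failed\n");
                }
        }
}

In the log the same sequence is driven by the bdev_nvme layer rather than application code, which is why the reset notices are interleaved with "Unable to perform failover, already in progress" messages.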
00:19:57.738 [2024-07-15 14:54:31.541057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:57.739 [2024-07-15 14:54:31.541070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:57.739 [2024-07-15 14:54:31.541079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:57.739 [2024-07-15 14:54:31.541092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:57.739 [2024-07-15 14:54:31.543632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:57.739 task offset: 34688 on job bdev=Nvme1n1 fails 00:19:57.739 00:19:57.739 Latency(us) 00:19:57.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.739 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme1n1 ended in about 1.88 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme1n1 : 1.88 135.98 8.50 34.00 0.00 374347.68 20472.20 1070546.16 00:19:57.739 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme2n1 ended in about 1.88 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme2n1 : 1.88 135.91 8.49 33.98 0.00 371270.80 23842.62 1070546.16 00:19:57.739 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme3n1 ended in about 1.88 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme3n1 : 1.88 139.02 8.69 33.96 0.00 361703.39 4400.27 1070546.16 00:19:57.739 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme4n1 ended in about 1.89 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme4n1 : 1.89 152.73 9.55 33.94 0.00 332267.61 6366.35 1070546.16 00:19:57.739 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme5n1 ended in about 1.89 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme5n1 : 1.89 135.69 8.48 33.92 0.00 362583.77 33204.91 1070546.16 00:19:57.739 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme6n1 ended in about 1.89 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme6n1 : 1.89 138.80 8.67 33.90 0.00 353046.05 11921.31 1070546.16 00:19:57.739 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme7n1 ended in about 1.89 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme7n1 : 1.89 135.55 8.47 33.89 0.00 356773.69 57172.36 1158426.82 00:19:57.739 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme8n1 ended in about 1.89 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme8n1 : 1.89 135.48 8.47 33.87 0.00 353871.68 14854.83 1142448.52 00:19:57.739 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme9n1 ended in about 1.89 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme9n1 : 1.89 135.41 8.46 33.85 0.00 350967.13 50181.85 1126470.22 00:19:57.739 Job: 
Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.739 Job: Nvme10n1 ended in about 1.89 seconds with error 00:19:57.739 Verification LBA range: start 0x0 length 0x400 00:19:57.739 Nvme10n1 : 1.89 101.51 6.34 33.84 0.00 433226.61 50431.51 1110491.92 00:19:57.739 =================================================================================================================== 00:19:57.739 Total : 1346.08 84.13 339.15 0.00 363274.61 4400.27 1158426.82 00:19:57.739 [2024-07-15 14:54:31.566537] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:57.739 [2024-07-15 14:54:31.566568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:57.739 [2024-07-15 14:54:31.566582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:57.739 [2024-07-15 14:54:31.574852] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.739 [2024-07-15 14:54:31.574874] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.574881] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:19:57.740 [2024-07-15 14:54:31.575010] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.575019] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.575025] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:19:57.740 [2024-07-15 14:54:31.575113] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.575126] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.575132] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:19:57.740 [2024-07-15 14:54:31.578231] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.578252] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.578260] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:19:57.740 [2024-07-15 14:54:31.578327] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.578338] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.578346] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:19:57.740 [2024-07-15 14:54:31.578438] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.578449] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: 
RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.578457] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:19:57.740 [2024-07-15 14:54:31.578547] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.578559] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.578567] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:19:57.740 [2024-07-15 14:54:31.579261] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.579276] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.579284] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:19:57.740 [2024-07-15 14:54:31.579394] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.579406] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.579415] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:19:57.740 [2024-07-15 14:54:31.579524] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.740 [2024-07-15 14:54:31.579536] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.740 [2024-07-15 14:54:31.579554] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2886174 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:57.998 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.998 14:54:31 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:57.998 rmmod nvme_rdma 00:19:57.998 rmmod nvme_fabrics 00:19:57.998 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2886174 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:58.257 00:19:58.257 real 0m5.108s 00:19:58.257 user 0m17.489s 00:19:58.257 sys 0m1.057s 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:58.257 ************************************ 00:19:58.257 END TEST nvmf_shutdown_tc3 00:19:58.257 ************************************ 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:58.257 00:19:58.257 real 0m22.823s 00:19:58.257 user 1m9.976s 00:19:58.257 sys 0m7.196s 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.257 14:54:31 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:58.257 ************************************ 00:19:58.257 END TEST nvmf_shutdown 00:19:58.257 ************************************ 00:19:58.257 14:54:31 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:19:58.257 14:54:31 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:58.257 14:54:31 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.257 14:54:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:58.257 14:54:32 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:58.257 14:54:32 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.257 14:54:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:58.257 14:54:32 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:58.257 14:54:32 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:58.257 14:54:32 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:58.257 14:54:32 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.257 14:54:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:58.257 ************************************ 00:19:58.257 START TEST nvmf_multicontroller 00:19:58.257 ************************************ 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:58.257 * Looking for test storage... 
00:19:58.257 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.257 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:19:58.258 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA 
because the rdma stack fails to configure the same IP for host and target.' 00:19:58.258 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:58.258 14:54:32 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:19:58.258 00:19:58.258 real 0m0.115s 00:19:58.258 user 0m0.061s 00:19:58.258 sys 0m0.063s 00:19:58.258 14:54:32 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.258 14:54:32 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:58.258 ************************************ 00:19:58.258 END TEST nvmf_multicontroller 00:19:58.258 ************************************ 00:19:58.515 14:54:32 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:19:58.515 14:54:32 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:58.515 14:54:32 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:58.515 14:54:32 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.515 14:54:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:58.515 ************************************ 00:19:58.515 START TEST nvmf_aer 00:19:58.515 ************************************ 00:19:58.515 14:54:32 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:58.515 * Looking for test storage... 00:19:58.515 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:58.515 14:54:32 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.515 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:58.515 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.515 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.515 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.515 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:58.516 14:54:32 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:03.790 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:03.791 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:03.791 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:03.791 Found net devices under 0000:da:00.0: mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:03.791 
14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:03.791 Found net devices under 0000:da:00.1: mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
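The device discovery above works in two steps: gather_supported_nvmf_pci_devs matches each PCI function against known Mellanox and Intel IDs (here 0x15b3:0x1015), and the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion then maps each matching function to its network interfaces (mlx_0_0 and mlx_0_1). A rough C equivalent of that second sysfs step is sketched below as an illustration only; the helper name is invented, and the BDF "0000:da:00.0" is simply the one reported in this log.

#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: list the net interfaces registered under one PCI
 * function, mirroring the /sys/bus/pci/devices/$pci/net/* expansion above. */
static void
print_netdevs_for_pci(const char *bdf)  /* e.g. "0000:da:00.0" */
{
        char path[256];
        DIR *dir;
        struct dirent *de;

        snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/net", bdf);

        dir = opendir(path);
        if (dir == NULL) {
                return; /* no netdev exposed for this function */
        }

        while ((de = readdir(dir)) != NULL) {
                if (strcmp(de->d_name, ".") != 0 && strcmp(de->d_name, "..") != 0) {
                        printf("Found net devices under %s: %s\n", bdf, de->d_name);
                }
        }
        closedir(dir);
}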
00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:03.791 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:03.791 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:03.791 altname enp218s0f0np0 00:20:03.791 altname ens818f0np0 00:20:03.791 inet 192.168.100.8/24 scope global mlx_0_0 00:20:03.791 valid_lft forever preferred_lft forever 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:03.791 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:03.791 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:03.791 altname enp218s0f1np1 00:20:03.791 altname ens818f1np1 00:20:03.791 inet 192.168.100.9/24 scope global mlx_0_1 00:20:03.791 valid_lft forever preferred_lft forever 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:03.791 192.168.100.9' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:03.791 192.168.100.9' 00:20:03.791 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:04.049 192.168.100.9' 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2889985 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2889985 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2889985 ']' 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.049 14:54:37 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.049 [2024-07-15 14:54:37.774417] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:20:04.049 [2024-07-15 14:54:37.774460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.049 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.049 [2024-07-15 14:54:37.829965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.049 [2024-07-15 14:54:37.910676] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.049 [2024-07-15 14:54:37.910710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.049 [2024-07-15 14:54:37.910717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.049 [2024-07-15 14:54:37.910723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.049 [2024-07-15 14:54:37.910728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
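The aer.sh test that starts here exercises exactly the ASYNC EVENT REQUEST (opcode 0c) admin commands that appear throughout the dumps earlier in this log: the target is brought up, a subsystem is created over RPC, and test/nvme/aer/aer waits for asynchronous event notifications. On the host side the mechanism amounts to registering an AER completion callback with the driver; the sketch below shows that registration using the public SPDK API, with the callback and helper names invented for this illustration (the actual aer test application does considerably more).

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical AER handler: called when the controller completes an
 * Asynchronous Event Request (admin opcode 0x0c). */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        if (spdk_nvme_cpl_is_error(cpl)) {
                /* e.g. ABORTED - SQ DELETION, as seen when queues are torn down */
                fprintf(stderr, "AER completed with error\n");
                return;
        }
        printf("async event received, cdw0=0x%x\n", cpl->cdw0);
}

static void
register_aer_handler(struct spdk_nvme_ctrlr *ctrlr)
{
        /* The driver submits and re-submits AER commands internally; the
         * application only registers a callback for their completions. */
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}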
00:20:04.049 [2024-07-15 14:54:37.910769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.049 [2024-07-15 14:54:37.910867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.049 [2024-07-15 14:54:37.910952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.049 [2024-07-15 14:54:37.910953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.980 [2024-07-15 14:54:38.676513] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c50cc0/0x1c551b0) succeed. 00:20:04.980 [2024-07-15 14:54:38.685583] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c52300/0x1c96840) succeed. 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.980 Malloc0 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.980 [2024-07-15 14:54:38.848941] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.980 14:54:38 nvmf_rdma.nvmf_aer -- 
host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:04.981 [ 00:20:04.981 { 00:20:04.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:04.981 "subtype": "Discovery", 00:20:04.981 "listen_addresses": [], 00:20:04.981 "allow_any_host": true, 00:20:04.981 "hosts": [] 00:20:04.981 }, 00:20:04.981 { 00:20:04.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.981 "subtype": "NVMe", 00:20:04.981 "listen_addresses": [ 00:20:04.981 { 00:20:04.981 "trtype": "RDMA", 00:20:04.981 "adrfam": "IPv4", 00:20:04.981 "traddr": "192.168.100.8", 00:20:04.981 "trsvcid": "4420" 00:20:04.981 } 00:20:04.981 ], 00:20:04.981 "allow_any_host": true, 00:20:04.981 "hosts": [], 00:20:04.981 "serial_number": "SPDK00000000000001", 00:20:04.981 "model_number": "SPDK bdev Controller", 00:20:04.981 "max_namespaces": 2, 00:20:04.981 "min_cntlid": 1, 00:20:04.981 "max_cntlid": 65519, 00:20:04.981 "namespaces": [ 00:20:04.981 { 00:20:04.981 "nsid": 1, 00:20:04.981 "bdev_name": "Malloc0", 00:20:04.981 "name": "Malloc0", 00:20:04.981 "nguid": "54166792D8A642A3BB3EC939DD6BF3AC", 00:20:04.981 "uuid": "54166792-d8a6-42a3-bb3e-c939dd6bf3ac" 00:20:04.981 } 00:20:04.981 ] 00:20:04.981 } 00:20:04.981 ] 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=2890234 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:04.981 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:05.239 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.239 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.239 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:05.239 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:05.239 14:54:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.239 Malloc1 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.239 [ 00:20:05.239 { 00:20:05.239 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:05.239 "subtype": "Discovery", 00:20:05.239 "listen_addresses": [], 00:20:05.239 "allow_any_host": true, 00:20:05.239 "hosts": [] 00:20:05.239 }, 00:20:05.239 { 00:20:05.239 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.239 "subtype": "NVMe", 00:20:05.239 "listen_addresses": [ 00:20:05.239 { 00:20:05.239 "trtype": "RDMA", 00:20:05.239 "adrfam": "IPv4", 00:20:05.239 "traddr": "192.168.100.8", 00:20:05.239 "trsvcid": "4420" 00:20:05.239 } 00:20:05.239 ], 00:20:05.239 "allow_any_host": true, 00:20:05.239 "hosts": [], 00:20:05.239 "serial_number": "SPDK00000000000001", 00:20:05.239 "model_number": "SPDK bdev Controller", 00:20:05.239 "max_namespaces": 2, 00:20:05.239 "min_cntlid": 1, 00:20:05.239 "max_cntlid": 65519, 00:20:05.239 "namespaces": [ 00:20:05.239 { 00:20:05.239 "nsid": 1, 00:20:05.239 "bdev_name": "Malloc0", 00:20:05.239 "name": "Malloc0", 00:20:05.239 "nguid": "54166792D8A642A3BB3EC939DD6BF3AC", 00:20:05.239 "uuid": "54166792-d8a6-42a3-bb3e-c939dd6bf3ac" 00:20:05.239 }, 00:20:05.239 { 00:20:05.239 "nsid": 2, 00:20:05.239 "bdev_name": "Malloc1", 00:20:05.239 "name": "Malloc1", 00:20:05.239 "nguid": "C70E1467CE3340C18941F334EB679481", 00:20:05.239 "uuid": "c70e1467-ce33-40c1-8941-f334eb679481" 00:20:05.239 } 00:20:05.239 ] 00:20:05.239 } 00:20:05.239 ] 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.239 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 2890234 00:20:05.498 Asynchronous Event Request test 00:20:05.498 Attaching to 192.168.100.8 00:20:05.498 Attached to 192.168.100.8 00:20:05.498 Registering asynchronous event callbacks... 00:20:05.498 Starting namespace attribute notice tests for all controllers... 00:20:05.498 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:05.498 aer_cb - Changed Namespace 00:20:05.498 Cleaning up... 
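Stripped of the xtrace noise, the target-side setup driven by aer.sh is a short RPC sequence, and the "Changed Namespace" callback above is triggered by hot-adding the second namespace. A condensed sketch of the same flow issued directly through scripts/rpc.py (the NQN, serial, address and bdev names are copied from this run; an nvmf_tgt listening on the default RPC socket is assumed):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Hot-adding a second namespace is what generates the namespace-attribute AER logged above.
  $RPC bdev_malloc_create 64 4096 --name Malloc1
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2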
00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:05.498 rmmod nvme_rdma 00:20:05.498 rmmod nvme_fabrics 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2889985 ']' 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2889985 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2889985 ']' 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2889985 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2889985 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2889985' 00:20:05.498 killing process with pid 2889985 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2889985 00:20:05.498 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2889985 00:20:05.757 14:54:39 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.757 14:54:39 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:05.757 00:20:05.757 real 0m7.342s 00:20:05.757 user 0m8.130s 00:20:05.757 sys 0m4.535s 00:20:05.757 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:05.757 14:54:39 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:05.757 ************************************ 00:20:05.757 END TEST nvmf_aer 00:20:05.757 ************************************ 00:20:05.757 14:54:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:05.757 14:54:39 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:05.757 14:54:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:05.757 14:54:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:05.757 14:54:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:05.757 ************************************ 00:20:05.757 START TEST nvmf_async_init 00:20:05.757 ************************************ 00:20:05.757 14:54:39 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:06.016 * Looking for test storage... 00:20:06.016 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.016 14:54:39 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.017 14:54:39 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=77cb1106e2d243a7bd354765f02e60d9 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.017 14:54:39 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.278 14:54:44 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:11.278 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:11.278 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.278 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:11.279 Found net devices under 0000:da:00.0: mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:11.279 Found net devices under 0000:da:00.1: mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:11.279 14:54:44 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:11.279 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:11.279 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:11.279 altname enp218s0f0np0 00:20:11.279 altname ens818f0np0 00:20:11.279 inet 192.168.100.8/24 scope global mlx_0_0 00:20:11.279 valid_lft forever preferred_lft forever 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:11.279 14:54:44 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:11.279 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:11.279 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:11.279 altname enp218s0f1np1 00:20:11.279 altname ens818f1np1 00:20:11.279 inet 192.168.100.9/24 scope global mlx_0_1 00:20:11.279 valid_lft forever preferred_lft forever 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:11.279 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:11.279 192.168.100.9' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:11.280 192.168.100.9' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:11.280 192.168.100.9' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2893293 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2893293 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2893293 ']' 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
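The waitforlisten call above simply blocks until the freshly started nvmf_tgt (pid 2893293) answers on /var/tmp/spdk.sock. A simplified stand-in for that helper, assuming rpc.py from the SPDK tree and the standard rpc_get_methods RPC; the real helper in autotest_common.sh does additional bookkeeping (for example tracking the pid), so this is only a sketch of the idea:

  # Poll the RPC socket until the target answers, or give up after ~10 seconds.
  wait_for_rpc_socket() {
      local sock=${1:-/var/tmp/spdk.sock}
      for _ in $(seq 1 100); do
          if ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
              return 0
          fi
          sleep 0.1
      done
      echo "timed out waiting for $sock" >&2
      return 1
  }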
00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.280 14:54:44 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.280 [2024-07-15 14:54:44.852606] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:20:11.280 [2024-07-15 14:54:44.852652] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.280 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.280 [2024-07-15 14:54:44.907324] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.280 [2024-07-15 14:54:44.986722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.280 [2024-07-15 14:54:44.986755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.280 [2024-07-15 14:54:44.986762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.280 [2024-07-15 14:54:44.986768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.280 [2024-07-15 14:54:44.986784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.280 [2024-07-15 14:54:44.986800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.844 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:11.844 [2024-07-15 14:54:45.714139] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8df910/0x8e3e00) succeed. 00:20:11.844 [2024-07-15 14:54:45.723190] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8e0e10/0x925490) succeed. 
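The two "Create IB device mlx5_0/mlx5_1 succeed" notices show the target binding both mlx5 ports enumerated earlier as soon as the RDMA transport is created. The equivalent step done by hand, plus an optional verification that is not part of this log (nvmf_get_transports is a standard SPDK RPC):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_get_transports    # should list the rdma transport just created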
00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 null0 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 77cb1106e2d243a7bd354765f02e60d9 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 [2024-07-15 14:54:45.811163] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 nvme0n1 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 [ 00:20:12.102 { 00:20:12.102 "name": "nvme0n1", 00:20:12.102 "aliases": [ 00:20:12.102 "77cb1106-e2d2-43a7-bd35-4765f02e60d9" 00:20:12.102 ], 00:20:12.102 "product_name": "NVMe disk", 00:20:12.102 "block_size": 512, 00:20:12.102 "num_blocks": 2097152, 00:20:12.102 "uuid": 
"77cb1106-e2d2-43a7-bd35-4765f02e60d9", 00:20:12.102 "assigned_rate_limits": { 00:20:12.102 "rw_ios_per_sec": 0, 00:20:12.102 "rw_mbytes_per_sec": 0, 00:20:12.102 "r_mbytes_per_sec": 0, 00:20:12.102 "w_mbytes_per_sec": 0 00:20:12.102 }, 00:20:12.102 "claimed": false, 00:20:12.102 "zoned": false, 00:20:12.102 "supported_io_types": { 00:20:12.102 "read": true, 00:20:12.102 "write": true, 00:20:12.102 "unmap": false, 00:20:12.102 "flush": true, 00:20:12.102 "reset": true, 00:20:12.102 "nvme_admin": true, 00:20:12.102 "nvme_io": true, 00:20:12.102 "nvme_io_md": false, 00:20:12.102 "write_zeroes": true, 00:20:12.102 "zcopy": false, 00:20:12.102 "get_zone_info": false, 00:20:12.102 "zone_management": false, 00:20:12.102 "zone_append": false, 00:20:12.102 "compare": true, 00:20:12.102 "compare_and_write": true, 00:20:12.102 "abort": true, 00:20:12.102 "seek_hole": false, 00:20:12.102 "seek_data": false, 00:20:12.102 "copy": true, 00:20:12.102 "nvme_iov_md": false 00:20:12.102 }, 00:20:12.102 "memory_domains": [ 00:20:12.102 { 00:20:12.102 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:12.102 "dma_device_type": 0 00:20:12.102 } 00:20:12.102 ], 00:20:12.102 "driver_specific": { 00:20:12.102 "nvme": [ 00:20:12.102 { 00:20:12.102 "trid": { 00:20:12.102 "trtype": "RDMA", 00:20:12.102 "adrfam": "IPv4", 00:20:12.102 "traddr": "192.168.100.8", 00:20:12.102 "trsvcid": "4420", 00:20:12.102 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:12.102 }, 00:20:12.102 "ctrlr_data": { 00:20:12.102 "cntlid": 1, 00:20:12.102 "vendor_id": "0x8086", 00:20:12.102 "model_number": "SPDK bdev Controller", 00:20:12.102 "serial_number": "00000000000000000000", 00:20:12.102 "firmware_revision": "24.09", 00:20:12.102 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.102 "oacs": { 00:20:12.102 "security": 0, 00:20:12.102 "format": 0, 00:20:12.102 "firmware": 0, 00:20:12.102 "ns_manage": 0 00:20:12.102 }, 00:20:12.102 "multi_ctrlr": true, 00:20:12.102 "ana_reporting": false 00:20:12.102 }, 00:20:12.102 "vs": { 00:20:12.102 "nvme_version": "1.3" 00:20:12.102 }, 00:20:12.102 "ns_data": { 00:20:12.102 "id": 1, 00:20:12.102 "can_share": true 00:20:12.102 } 00:20:12.102 } 00:20:12.102 ], 00:20:12.102 "mp_policy": "active_passive" 00:20:12.102 } 00:20:12.102 } 00:20:12.102 ] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 [2024-07-15 14:54:45.925371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:12.102 [2024-07-15 14:54:45.949624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:12.102 [2024-07-15 14:54:45.973049] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.102 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.102 [ 00:20:12.102 { 00:20:12.102 "name": "nvme0n1", 00:20:12.102 "aliases": [ 00:20:12.102 "77cb1106-e2d2-43a7-bd35-4765f02e60d9" 00:20:12.102 ], 00:20:12.102 "product_name": "NVMe disk", 00:20:12.102 "block_size": 512, 00:20:12.102 "num_blocks": 2097152, 00:20:12.102 "uuid": "77cb1106-e2d2-43a7-bd35-4765f02e60d9", 00:20:12.102 "assigned_rate_limits": { 00:20:12.102 "rw_ios_per_sec": 0, 00:20:12.102 "rw_mbytes_per_sec": 0, 00:20:12.102 "r_mbytes_per_sec": 0, 00:20:12.102 "w_mbytes_per_sec": 0 00:20:12.102 }, 00:20:12.102 "claimed": false, 00:20:12.103 "zoned": false, 00:20:12.103 "supported_io_types": { 00:20:12.103 "read": true, 00:20:12.103 "write": true, 00:20:12.103 "unmap": false, 00:20:12.103 "flush": true, 00:20:12.103 "reset": true, 00:20:12.103 "nvme_admin": true, 00:20:12.103 "nvme_io": true, 00:20:12.103 "nvme_io_md": false, 00:20:12.103 "write_zeroes": true, 00:20:12.103 "zcopy": false, 00:20:12.103 "get_zone_info": false, 00:20:12.103 "zone_management": false, 00:20:12.103 "zone_append": false, 00:20:12.103 "compare": true, 00:20:12.103 "compare_and_write": true, 00:20:12.103 "abort": true, 00:20:12.103 "seek_hole": false, 00:20:12.103 "seek_data": false, 00:20:12.103 "copy": true, 00:20:12.103 "nvme_iov_md": false 00:20:12.103 }, 00:20:12.103 "memory_domains": [ 00:20:12.103 { 00:20:12.103 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:12.103 "dma_device_type": 0 00:20:12.103 } 00:20:12.103 ], 00:20:12.103 "driver_specific": { 00:20:12.103 "nvme": [ 00:20:12.103 { 00:20:12.103 "trid": { 00:20:12.103 "trtype": "RDMA", 00:20:12.103 "adrfam": "IPv4", 00:20:12.103 "traddr": "192.168.100.8", 00:20:12.103 "trsvcid": "4420", 00:20:12.103 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:12.103 }, 00:20:12.103 "ctrlr_data": { 00:20:12.103 "cntlid": 2, 00:20:12.103 "vendor_id": "0x8086", 00:20:12.103 "model_number": "SPDK bdev Controller", 00:20:12.103 "serial_number": "00000000000000000000", 00:20:12.103 "firmware_revision": "24.09", 00:20:12.103 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.103 "oacs": { 00:20:12.103 "security": 0, 00:20:12.103 "format": 0, 00:20:12.103 "firmware": 0, 00:20:12.103 "ns_manage": 0 00:20:12.103 }, 00:20:12.103 "multi_ctrlr": true, 00:20:12.103 "ana_reporting": false 00:20:12.103 }, 00:20:12.103 "vs": { 00:20:12.103 "nvme_version": "1.3" 00:20:12.103 }, 00:20:12.103 "ns_data": { 00:20:12.103 "id": 1, 00:20:12.103 "can_share": true 00:20:12.103 } 00:20:12.103 } 00:20:12.103 ], 00:20:12.103 "mp_policy": "active_passive" 00:20:12.103 } 00:20:12.103 } 00:20:12.103 ] 00:20:12.103 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.103 14:54:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.103 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.103 14:54:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.103 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 
00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RqQrXav7q3 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RqQrXav7q3 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.361 [2024-07-15 14:54:46.044293] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RqQrXav7q3 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RqQrXav7q3 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.361 [2024-07-15 14:54:46.064341] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.361 nvme0n1 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.361 [ 00:20:12.361 { 00:20:12.361 "name": "nvme0n1", 00:20:12.361 "aliases": [ 00:20:12.361 "77cb1106-e2d2-43a7-bd35-4765f02e60d9" 00:20:12.361 ], 00:20:12.361 "product_name": "NVMe disk", 00:20:12.361 "block_size": 512, 00:20:12.361 "num_blocks": 2097152, 00:20:12.361 "uuid": "77cb1106-e2d2-43a7-bd35-4765f02e60d9", 00:20:12.361 "assigned_rate_limits": { 00:20:12.361 "rw_ios_per_sec": 0, 00:20:12.361 "rw_mbytes_per_sec": 0, 00:20:12.361 "r_mbytes_per_sec": 0, 00:20:12.361 "w_mbytes_per_sec": 0 00:20:12.361 }, 00:20:12.361 "claimed": false, 00:20:12.361 "zoned": false, 00:20:12.361 "supported_io_types": { 
00:20:12.361 "read": true, 00:20:12.361 "write": true, 00:20:12.361 "unmap": false, 00:20:12.361 "flush": true, 00:20:12.361 "reset": true, 00:20:12.361 "nvme_admin": true, 00:20:12.361 "nvme_io": true, 00:20:12.361 "nvme_io_md": false, 00:20:12.361 "write_zeroes": true, 00:20:12.361 "zcopy": false, 00:20:12.361 "get_zone_info": false, 00:20:12.361 "zone_management": false, 00:20:12.361 "zone_append": false, 00:20:12.361 "compare": true, 00:20:12.361 "compare_and_write": true, 00:20:12.361 "abort": true, 00:20:12.361 "seek_hole": false, 00:20:12.361 "seek_data": false, 00:20:12.361 "copy": true, 00:20:12.361 "nvme_iov_md": false 00:20:12.361 }, 00:20:12.361 "memory_domains": [ 00:20:12.361 { 00:20:12.361 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:12.361 "dma_device_type": 0 00:20:12.361 } 00:20:12.361 ], 00:20:12.361 "driver_specific": { 00:20:12.361 "nvme": [ 00:20:12.361 { 00:20:12.361 "trid": { 00:20:12.361 "trtype": "RDMA", 00:20:12.361 "adrfam": "IPv4", 00:20:12.361 "traddr": "192.168.100.8", 00:20:12.361 "trsvcid": "4421", 00:20:12.361 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:12.361 }, 00:20:12.361 "ctrlr_data": { 00:20:12.361 "cntlid": 3, 00:20:12.361 "vendor_id": "0x8086", 00:20:12.361 "model_number": "SPDK bdev Controller", 00:20:12.361 "serial_number": "00000000000000000000", 00:20:12.361 "firmware_revision": "24.09", 00:20:12.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:12.361 "oacs": { 00:20:12.361 "security": 0, 00:20:12.361 "format": 0, 00:20:12.361 "firmware": 0, 00:20:12.361 "ns_manage": 0 00:20:12.361 }, 00:20:12.361 "multi_ctrlr": true, 00:20:12.361 "ana_reporting": false 00:20:12.361 }, 00:20:12.361 "vs": { 00:20:12.361 "nvme_version": "1.3" 00:20:12.361 }, 00:20:12.361 "ns_data": { 00:20:12.361 "id": 1, 00:20:12.361 "can_share": true 00:20:12.361 } 00:20:12.361 } 00:20:12.361 ], 00:20:12.361 "mp_policy": "active_passive" 00:20:12.361 } 00:20:12.361 } 00:20:12.361 ] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.RqQrXav7q3 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:12.361 rmmod nvme_rdma 00:20:12.361 rmmod nvme_fabrics 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.361 
14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2893293 ']' 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2893293 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2893293 ']' 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2893293 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2893293 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2893293' 00:20:12.361 killing process with pid 2893293 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2893293 00:20:12.361 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2893293 00:20:12.618 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.618 14:54:46 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:12.618 00:20:12.618 real 0m6.838s 00:20:12.618 user 0m3.270s 00:20:12.618 sys 0m4.160s 00:20:12.618 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.618 14:54:46 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:12.618 ************************************ 00:20:12.618 END TEST nvmf_async_init 00:20:12.618 ************************************ 00:20:12.618 14:54:46 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:12.618 14:54:46 nvmf_rdma -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:12.618 14:54:46 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:12.618 14:54:46 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.618 14:54:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:12.875 ************************************ 00:20:12.875 START TEST dma 00:20:12.875 ************************************ 00:20:12.875 14:54:46 nvmf_rdma.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:12.875 * Looking for test storage... 
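Before the dma test output continues: the PSK-secured attach path exercised by nvmf_async_init above reduces to a short RPC sequence. The lines below are a hand-written recap of the rpc_cmd calls from host/async_init.sh, not an extra step of the run; scripts/rpc.py stands in for the test's rpc_cmd wrapper, the key material is elided, and /tmp/psk.key is an illustrative path.

  # write the TLS PSK interchange key and lock down its permissions (key value elided)
  echo -n 'NVMeTLSkey-1:01:...' > /tmp/psk.key
  chmod 0600 /tmp/psk.key
  # require explicit host authorization on the subsystem
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  # expose a listener that demands a secure channel
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
  # authorize the host NQN and bind it to the PSK
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key
  # attach from the initiator side with the same PSK (TLS support is flagged experimental in the log above)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/psk.key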
00:20:12.875 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:12.875 14:54:46 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:12.875 14:54:46 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.875 14:54:46 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.875 14:54:46 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.875 14:54:46 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.875 14:54:46 nvmf_rdma.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.875 14:54:46 nvmf_rdma.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.875 14:54:46 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:20:12.875 14:54:46 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.875 14:54:46 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:20:12.875 14:54:46 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:20:12.875 14:54:46 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:20:12.875 14:54:46 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:20:12.875 14:54:46 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.875 14:54:46 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:12.875 14:54:46 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.875 14:54:46 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.875 14:54:46 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.133 14:54:51 nvmf_rdma.dma -- 
nvmf/common.sh@291 -- # pci_devs=() 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:18.133 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:18.133 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:18.133 Found net devices under 0000:da:00.0: mlx_0_0 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:18.133 Found net devices under 0000:da:00.1: mlx_0_1 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:18.133 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:18.134 14:54:51 nvmf_rdma.dma -- 
nvmf/common.sh@66 -- # modprobe iw_cm 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:18.134 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:18.134 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:18.134 altname enp218s0f0np0 00:20:18.134 altname ens818f0np0 00:20:18.134 inet 192.168.100.8/24 scope global mlx_0_0 00:20:18.134 valid_lft forever preferred_lft forever 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.134 14:54:51 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:18.134 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:18.134 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:18.134 altname enp218s0f1np1 00:20:18.134 altname ens818f1np1 00:20:18.134 inet 192.168.100.9/24 scope global mlx_0_1 00:20:18.134 valid_lft forever preferred_lft forever 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:18.134 14:54:51 
nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:18.134 192.168.100.9' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:18.134 192.168.100.9' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:18.134 192.168.100.9' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:18.134 14:54:51 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=2896600 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:18.134 14:54:51 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 2896600 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@829 -- # '[' -z 2896600 ']' 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.134 14:54:51 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:18.134 [2024-07-15 14:54:51.996761] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:20:18.134 [2024-07-15 14:54:51.996803] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.134 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.134 [2024-07-15 14:54:52.050467] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:18.392 [2024-07-15 14:54:52.129659] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
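The get_ip_address trace above boils down to one pipeline per RDMA netdev; a minimal standalone sketch using the interface names from this run:

  # derive the IPv4 address assigned to each mlx5 netdev, as nvmf/common.sh does
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # on this host the loop prints 192.168.100.8 and 192.168.100.9, which become
  # NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP for the rest of the test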
00:20:18.392 [2024-07-15 14:54:52.129691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.392 [2024-07-15 14:54:52.129698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.392 [2024-07-15 14:54:52.129704] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.392 [2024-07-15 14:54:52.129709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.392 [2024-07-15 14:54:52.129751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.392 [2024-07-15 14:54:52.129754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.957 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:18.957 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@862 -- # return 0 00:20:18.957 14:54:52 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.957 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:18.957 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:18.957 14:54:52 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.957 14:54:52 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:18.957 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.957 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:18.957 [2024-07-15 14:54:52.853096] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16573c0/0x165b8b0) succeed. 00:20:18.957 [2024-07-15 14:54:52.862965] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1658870/0x169cf40) succeed. 
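With the RDMA transport created and both mlx5 IB devices registered, the effective transport options can be spot-checked against the nvmf_create_transport arguments; an optional verification step, assuming the standard scripts/rpc.py client on the same RPC socket:

  # print the created transports and their options as JSON; the rdma entry should
  # report the 1024 shared buffers requested above
  scripts/rpc.py nvmf_get_transports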
00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 14:54:52 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 Malloc0 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 14:54:52 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.215 14:54:52 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.215 14:54:52 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 14:54:53 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.216 14:54:53 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:19.216 14:54:53 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.216 14:54:53 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:19.216 [2024-07-15 14:54:53.005342] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:19.216 14:54:53 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.216 14:54:53 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:20:19.216 14:54:53 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.216 { 00:20:19.216 "params": { 00:20:19.216 "name": "Nvme$subsystem", 00:20:19.216 "trtype": "$TEST_TRANSPORT", 00:20:19.216 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.216 "adrfam": "ipv4", 00:20:19.216 "trsvcid": "$NVMF_PORT", 00:20:19.216 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.216 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.216 "hdgst": ${hdgst:-false}, 00:20:19.216 "ddgst": ${ddgst:-false} 00:20:19.216 }, 00:20:19.216 "method": "bdev_nvme_attach_controller" 00:20:19.216 } 00:20:19.216 EOF 00:20:19.216 )") 00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
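Stripped of the xtrace noise, the target-side configuration the dma test just applied is only a handful of RPCs. A consolidated sketch, with the same sizes, NQN and address as above and scripts/rpc.py standing in for the test's rpc_cmd wrapper:

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024    # RDMA transport
  scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0                      # 256 MB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420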
00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:20:19.216 14:54:53 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.216 "params": { 00:20:19.216 "name": "Nvme0", 00:20:19.216 "trtype": "rdma", 00:20:19.216 "traddr": "192.168.100.8", 00:20:19.216 "adrfam": "ipv4", 00:20:19.216 "trsvcid": "4420", 00:20:19.216 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.216 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.216 "hdgst": false, 00:20:19.216 "ddgst": false 00:20:19.216 }, 00:20:19.216 "method": "bdev_nvme_attach_controller" 00:20:19.216 }' 00:20:19.216 [2024-07-15 14:54:53.048010] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:20:19.216 [2024-07-15 14:54:53.048052] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896846 ] 00:20:19.216 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.216 [2024-07-15 14:54:53.096179] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:19.482 [2024-07-15 14:54:53.170265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.482 [2024-07-15 14:54:53.170268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.753 bdev Nvme0n1 reports 1 memory domains 00:20:24.753 bdev Nvme0n1 supports RDMA memory domain 00:20:24.753 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:24.753 ========================================================================== 00:20:24.753 Latency [us] 00:20:24.753 IOPS MiB/s Average min max 00:20:24.753 Core 2: 21847.45 85.34 731.67 259.55 8849.27 00:20:24.753 Core 3: 21845.25 85.33 731.71 258.51 9005.01 00:20:24.753 ========================================================================== 00:20:24.753 Total : 43692.69 170.67 731.69 258.51 9005.01 00:20:24.753 00:20:24.753 Total operations: 218495, translate 218495 pull_push 0 memzero 0 00:20:24.753 14:54:58 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:20:24.753 14:54:58 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:20:24.753 14:54:58 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:20:24.753 [2024-07-15 14:54:58.608148] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:20:24.753 [2024-07-15 14:54:58.608197] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897760 ] 00:20:24.753 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.753 [2024-07-15 14:54:58.656918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:25.011 [2024-07-15 14:54:58.731209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.012 [2024-07-15 14:54:58.731211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.274 bdev Malloc0 reports 2 memory domains 00:20:30.274 bdev Malloc0 doesn't support RDMA memory domain 00:20:30.274 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:30.274 ========================================================================== 00:20:30.274 Latency [us] 00:20:30.274 IOPS MiB/s Average min max 00:20:30.274 Core 2: 14607.62 57.06 1094.58 411.97 2504.62 00:20:30.274 Core 3: 14581.43 56.96 1096.53 448.45 1774.68 00:20:30.274 ========================================================================== 00:20:30.274 Total : 29189.05 114.02 1095.56 411.97 2504.62 00:20:30.274 00:20:30.274 Total operations: 145997, translate 0 pull_push 583988 memzero 0 00:20:30.274 14:55:04 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:20:30.274 14:55:04 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:20:30.274 14:55:04 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:30.274 14:55:04 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:20:30.274 Ignoring -M option 00:20:30.274 [2024-07-15 14:55:04.080014] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
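The memzero phase starting here targets lvs0/lvol0; the lvstore/lvol setup itself is buried in the dma.sh helpers rather than spelled out in this trace. Built by hand on top of the attached Nvme0n1 bdev it would look roughly like the sketch below; the names come from the run, while the size value (and its units, which vary between SPDK releases) is an illustrative assumption:

  # carve a logical volume store and a volume out of the attached namespace bdev
  scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0
  scripts/rpc.py bdev_lvol_create -l lvs0 lvol0 1024   # size units depend on the SPDK release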
00:20:30.274 [2024-07-15 14:55:04.080063] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898673 ] 00:20:30.274 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.274 [2024-07-15 14:55:04.128459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:30.532 [2024-07-15 14:55:04.202490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.532 [2024-07-15 14:55:04.202493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.783 bdev 16a7e476-20bb-480a-b0a0-8b0d99880f6b reports 1 memory domains 00:20:35.783 bdev 16a7e476-20bb-480a-b0a0-8b0d99880f6b supports RDMA memory domain 00:20:35.783 Initialization complete, running randread IO for 5 sec on 2 cores 00:20:35.783 ========================================================================== 00:20:35.783 Latency [us] 00:20:35.783 IOPS MiB/s Average min max 00:20:35.783 Core 2: 80815.30 315.68 197.29 79.79 2757.66 00:20:35.783 Core 3: 82007.38 320.34 194.39 67.98 2694.29 00:20:35.783 ========================================================================== 00:20:35.783 Total : 162822.68 636.03 195.83 67.98 2757.66 00:20:35.783 00:20:35.783 Total operations: 814201, translate 0 pull_push 0 memzero 814201 00:20:35.783 14:55:09 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:20:35.783 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.040 [2024-07-15 14:55:09.738354] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:38.566 Initializing NVMe Controllers 00:20:38.566 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:20:38.566 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:20:38.566 Initialization complete. Launching workers. 00:20:38.566 ======================================================== 00:20:38.566 Latency(us) 00:20:38.566 Device Information : IOPS MiB/s Average min max 00:20:38.566 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2008.91 7.85 7964.18 6558.10 8807.64 00:20:38.566 ======================================================== 00:20:38.566 Total : 2008.91 7.85 7964.18 6558.10 8807.64 00:20:38.566 00:20:38.566 14:55:12 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:20:38.566 14:55:12 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:20:38.566 14:55:12 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:38.566 14:55:12 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:20:38.566 [2024-07-15 14:55:12.072797] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
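Each dma phase in this log is the same test_dma binary pointed at a different bdev with a different -x selector (the actual script also varies the workload flags, e.g. randread and no -f for some phases). A simplified sweep outside the harness could look like the following; config_$mode.json and bdev_$mode are placeholders that must describe a bdev supporting the selected mode:

  DMA=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma
  for mode in translate pull_push memzero; do
      # 16 QD, 4 KiB I/O, 5 s run on cores 2-3, as in the runs above
      "$DMA" -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
          --json "config_$mode.json" -b "bdev_$mode" -x "$mode"
  done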
00:20:38.566 [2024-07-15 14:55:12.072847] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899839 ] 00:20:38.566 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.566 [2024-07-15 14:55:12.124555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:38.566 [2024-07-15 14:55:12.197653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.566 [2024-07-15 14:55:12.197655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.874 bdev 5e4d7d84-0802-4a0c-bd42-8b486a2bd12c reports 1 memory domains 00:20:43.874 bdev 5e4d7d84-0802-4a0c-bd42-8b486a2bd12c supports RDMA memory domain 00:20:43.874 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:43.874 ========================================================================== 00:20:43.874 Latency [us] 00:20:43.874 IOPS MiB/s Average min max 00:20:43.874 Core 2: 19099.44 74.61 836.98 39.32 12177.94 00:20:43.874 Core 3: 19366.99 75.65 825.44 13.19 11816.61 00:20:43.874 ========================================================================== 00:20:43.874 Total : 38466.42 150.26 831.17 13.19 12177.94 00:20:43.874 00:20:43.874 Total operations: 192368, translate 192264 pull_push 0 memzero 104 00:20:43.874 14:55:17 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:20:43.874 14:55:17 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:43.874 rmmod nvme_rdma 00:20:43.874 rmmod nvme_fabrics 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 2896600 ']' 00:20:43.874 14:55:17 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 2896600 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@948 -- # '[' -z 2896600 ']' 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # kill -0 2896600 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # uname 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2896600 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2896600' 00:20:43.874 killing process with pid 2896600 00:20:43.874 14:55:17 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # kill 2896600 00:20:43.874 14:55:17 nvmf_rdma.dma -- 
common/autotest_common.sh@972 -- # wait 2896600 00:20:44.131 14:55:18 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:44.131 14:55:18 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:44.131 00:20:44.131 real 0m31.481s 00:20:44.131 user 1m36.128s 00:20:44.131 sys 0m4.998s 00:20:44.131 14:55:18 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:44.131 14:55:18 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:44.131 ************************************ 00:20:44.131 END TEST dma 00:20:44.131 ************************************ 00:20:44.389 14:55:18 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:44.389 14:55:18 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:44.389 14:55:18 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:44.389 14:55:18 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.389 14:55:18 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:44.389 ************************************ 00:20:44.389 START TEST nvmf_identify 00:20:44.389 ************************************ 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:44.389 * Looking for test storage... 00:20:44.389 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.389 14:55:18 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 
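The nvmftestinit pass that follows walks the PCI bus for supported NICs and, on this rig, settles on the two mlx5 functions at 0000:da:00.0/1 (device ID 0x1015, ConnectX-4 Lx). The same inventory can be spot-checked by hand with lspci:

  # list Mellanox (vendor 0x15b3) functions with numeric IDs
  lspci -nn -d 15b3:
  # or restrict the listing to the exact device ID seen in this run
  lspci -nn -d 15b3:1015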
00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.390 14:55:18 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:49.654 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:49.655 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:49.655 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:49.655 Found net devices under 0000:da:00.0: mlx_0_0 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:49.655 Found net devices under 0000:da:00.1: mlx_0_1 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:49.655 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:49.655 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:49.655 altname enp218s0f0np0 00:20:49.655 altname ens818f0np0 00:20:49.655 inet 192.168.100.8/24 scope global mlx_0_0 00:20:49.655 valid_lft forever preferred_lft forever 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:49.655 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:49.655 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:49.655 altname enp218s0f1np1 00:20:49.655 altname ens818f1np1 00:20:49.655 inet 192.168.100.9/24 scope global mlx_0_1 00:20:49.655 valid_lft forever 
preferred_lft forever 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:49.655 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 
00:20:49.912 192.168.100.9' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:49.912 192.168.100.9' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:49.912 192.168.100.9' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2904021 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2904021 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2904021 ']' 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:49.912 14:55:23 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.912 [2024-07-15 14:55:23.703768] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:20:49.912 [2024-07-15 14:55:23.703811] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.912 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.912 [2024-07-15 14:55:23.758726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.169 [2024-07-15 14:55:23.832419] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:50.169 [2024-07-15 14:55:23.832461] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.169 [2024-07-15 14:55:23.832468] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.169 [2024-07-15 14:55:23.832474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.169 [2024-07-15 14:55:23.832478] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:50.169 [2024-07-15 14:55:23.832528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:50.169 [2024-07-15 14:55:23.832554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.169 [2024-07-15 14:55:23.832617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.169 [2024-07-15 14:55:23.832618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:50.733 [2024-07-15 14:55:24.522285] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16b9cc0/0x16be1b0) succeed. 00:20:50.733 [2024-07-15 14:55:24.531400] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16bb300/0x16ff840) succeed. 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.733 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.088 Malloc0 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.088 14:55:24 
nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.088 [2024-07-15 14:55:24.729723] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.088 [ 00:20:51.088 { 00:20:51.088 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:51.088 "subtype": "Discovery", 00:20:51.088 "listen_addresses": [ 00:20:51.088 { 00:20:51.088 "trtype": "RDMA", 00:20:51.088 "adrfam": "IPv4", 00:20:51.088 "traddr": "192.168.100.8", 00:20:51.088 "trsvcid": "4420" 00:20:51.088 } 00:20:51.088 ], 00:20:51.088 "allow_any_host": true, 00:20:51.088 "hosts": [] 00:20:51.088 }, 00:20:51.088 { 00:20:51.088 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.088 "subtype": "NVMe", 00:20:51.088 "listen_addresses": [ 00:20:51.088 { 00:20:51.088 "trtype": "RDMA", 00:20:51.088 "adrfam": "IPv4", 00:20:51.088 "traddr": "192.168.100.8", 00:20:51.088 "trsvcid": "4420" 00:20:51.088 } 00:20:51.088 ], 00:20:51.088 "allow_any_host": true, 00:20:51.088 "hosts": [], 00:20:51.088 "serial_number": "SPDK00000000000001", 00:20:51.088 "model_number": "SPDK bdev Controller", 00:20:51.088 "max_namespaces": 32, 00:20:51.088 "min_cntlid": 1, 00:20:51.088 "max_cntlid": 65519, 00:20:51.088 "namespaces": [ 00:20:51.088 { 00:20:51.088 "nsid": 1, 00:20:51.088 "bdev_name": "Malloc0", 00:20:51.088 "name": "Malloc0", 00:20:51.088 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:51.088 "eui64": "ABCDEF0123456789", 00:20:51.088 "uuid": "a16908ae-2d0a-489f-8d24-6a3aa94b84db" 00:20:51.088 } 00:20:51.088 ] 00:20:51.088 } 00:20:51.088 ] 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.088 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:51.088 [2024-07-15 14:55:24.780022] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:20:51.088 [2024-07-15 14:55:24.780064] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904092 ] 00:20:51.088 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.088 [2024-07-15 14:55:24.821565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:51.088 [2024-07-15 14:55:24.821636] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:51.088 [2024-07-15 14:55:24.821648] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:51.088 [2024-07-15 14:55:24.821652] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:51.088 [2024-07-15 14:55:24.821680] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:51.088 [2024-07-15 14:55:24.833439] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:20:51.088 [2024-07-15 14:55:24.843706] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:51.088 [2024-07-15 14:55:24.843715] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:51.088 [2024-07-15 14:55:24.843721] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843726] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843730] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843735] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843739] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843743] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843747] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843751] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843756] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843760] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843764] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843768] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.088 [2024-07-15 14:55:24.843772] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843777] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843781] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843785] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843789] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843793] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843797] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843801] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843806] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843810] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843817] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843821] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843826] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843830] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843834] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843838] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843842] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843846] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843850] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843854] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:51.089 [2024-07-15 14:55:24.843859] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:51.089 [2024-07-15 14:55:24.843861] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:51.089 [2024-07-15 14:55:24.843877] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.843889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181300 00:20:51.089 [2024-07-15 14:55:24.848544] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.089 [2024-07-15 14:55:24.848552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:51.089 [2024-07-15 14:55:24.848558] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848563] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:51.089 [2024-07-15 14:55:24.848569] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:51.089 [2024-07-15 14:55:24.848573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:51.089 [2024-07-15 14:55:24.848587] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.089 [2024-07-15 14:55:24.848621] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.089 [2024-07-15 14:55:24.848625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:51.089 [2024-07-15 14:55:24.848629] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:51.089 [2024-07-15 14:55:24.848633] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848638] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:51.089 [2024-07-15 14:55:24.848644] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.089 [2024-07-15 14:55:24.848672] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.089 [2024-07-15 14:55:24.848678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:51.089 [2024-07-15 14:55:24.848683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:51.089 [2024-07-15 14:55:24.848687] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848692] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:51.089 [2024-07-15 14:55:24.848697] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.089 [2024-07-15 14:55:24.848722] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.089 [2024-07-15 14:55:24.848727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:51.089 [2024-07-15 14:55:24.848731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:51.089 [2024-07-15 14:55:24.848735] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848741] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.089 [2024-07-15 14:55:24.848766] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.089 [2024-07-15 14:55:24.848771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:51.089 [2024-07-15 14:55:24.848775] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:51.089 [2024-07-15 14:55:24.848779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:51.089 [2024-07-15 14:55:24.848783] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:51.089 [2024-07-15 14:55:24.848892] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:51.089 [2024-07-15 14:55:24.848896] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:51.089 [2024-07-15 14:55:24.848904] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.089 [2024-07-15 14:55:24.848927] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.089 [2024-07-15 14:55:24.848932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:51.089 [2024-07-15 14:55:24.848936] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:51.089 [2024-07-15 14:55:24.848940] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848946] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.089 [2024-07-15 14:55:24.848955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.089 [2024-07-15 14:55:24.848970] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.089 [2024-07-15 14:55:24.848974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:51.089 [2024-07-15 14:55:24.848978] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:20:51.089 [2024-07-15 14:55:24.848982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:51.089 [2024-07-15 14:55:24.848985] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.848990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:51.090 [2024-07-15 14:55:24.848996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:51.090 [2024-07-15 14:55:24.849005] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181300 00:20:51.090 [2024-07-15 14:55:24.849047] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849058] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:51.090 [2024-07-15 14:55:24.849062] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:51.090 [2024-07-15 14:55:24.849065] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:51.090 [2024-07-15 14:55:24.849070] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:51.090 [2024-07-15 14:55:24.849073] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:51.090 [2024-07-15 14:55:24.849077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:51.090 [2024-07-15 14:55:24.849081] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:51.090 [2024-07-15 14:55:24.849092] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.090 [2024-07-15 14:55:24.849115] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849126] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849131] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.090 [2024-07-15 14:55:24.849136] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.090 [2024-07-15 14:55:24.849147] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.090 [2024-07-15 14:55:24.849157] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.090 [2024-07-15 14:55:24.849166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:51.090 [2024-07-15 14:55:24.849169] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:51.090 [2024-07-15 14:55:24.849183] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.090 [2024-07-15 14:55:24.849207] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849216] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:51.090 [2024-07-15 14:55:24.849221] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:51.090 [2024-07-15 14:55:24.849225] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849232] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181300 00:20:51.090 [2024-07-15 14:55:24.849259] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849268] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849275] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:51.090 [2024-07-15 14:55:24.849294] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x181300 00:20:51.090 [2024-07-15 14:55:24.849306] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.090 [2024-07-15 14:55:24.849333] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849348] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x181300 00:20:51.090 [2024-07-15 14:55:24.849358] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849362] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849370] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849386] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849398] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x181300 00:20:51.090 [2024-07-15 14:55:24.849407] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.090 [2024-07-15 14:55:24.849429] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.090 [2024-07-15 14:55:24.849433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:51.090 [2024-07-15 14:55:24.849441] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.090 ===================================================== 00:20:51.090 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:51.090 
===================================================== 00:20:51.090 Controller Capabilities/Features 00:20:51.090 ================================ 00:20:51.090 Vendor ID: 0000 00:20:51.090 Subsystem Vendor ID: 0000 00:20:51.090 Serial Number: .................... 00:20:51.090 Model Number: ........................................ 00:20:51.090 Firmware Version: 24.09 00:20:51.090 Recommended Arb Burst: 0 00:20:51.090 IEEE OUI Identifier: 00 00 00 00:20:51.090 Multi-path I/O 00:20:51.090 May have multiple subsystem ports: No 00:20:51.090 May have multiple controllers: No 00:20:51.090 Associated with SR-IOV VF: No 00:20:51.091 Max Data Transfer Size: 131072 00:20:51.091 Max Number of Namespaces: 0 00:20:51.091 Max Number of I/O Queues: 1024 00:20:51.091 NVMe Specification Version (VS): 1.3 00:20:51.091 NVMe Specification Version (Identify): 1.3 00:20:51.091 Maximum Queue Entries: 128 00:20:51.091 Contiguous Queues Required: Yes 00:20:51.091 Arbitration Mechanisms Supported 00:20:51.091 Weighted Round Robin: Not Supported 00:20:51.091 Vendor Specific: Not Supported 00:20:51.091 Reset Timeout: 15000 ms 00:20:51.091 Doorbell Stride: 4 bytes 00:20:51.091 NVM Subsystem Reset: Not Supported 00:20:51.091 Command Sets Supported 00:20:51.091 NVM Command Set: Supported 00:20:51.091 Boot Partition: Not Supported 00:20:51.091 Memory Page Size Minimum: 4096 bytes 00:20:51.091 Memory Page Size Maximum: 4096 bytes 00:20:51.091 Persistent Memory Region: Not Supported 00:20:51.091 Optional Asynchronous Events Supported 00:20:51.091 Namespace Attribute Notices: Not Supported 00:20:51.091 Firmware Activation Notices: Not Supported 00:20:51.091 ANA Change Notices: Not Supported 00:20:51.091 PLE Aggregate Log Change Notices: Not Supported 00:20:51.091 LBA Status Info Alert Notices: Not Supported 00:20:51.091 EGE Aggregate Log Change Notices: Not Supported 00:20:51.091 Normal NVM Subsystem Shutdown event: Not Supported 00:20:51.091 Zone Descriptor Change Notices: Not Supported 00:20:51.091 Discovery Log Change Notices: Supported 00:20:51.091 Controller Attributes 00:20:51.091 128-bit Host Identifier: Not Supported 00:20:51.091 Non-Operational Permissive Mode: Not Supported 00:20:51.091 NVM Sets: Not Supported 00:20:51.091 Read Recovery Levels: Not Supported 00:20:51.091 Endurance Groups: Not Supported 00:20:51.091 Predictable Latency Mode: Not Supported 00:20:51.091 Traffic Based Keep ALive: Not Supported 00:20:51.091 Namespace Granularity: Not Supported 00:20:51.091 SQ Associations: Not Supported 00:20:51.091 UUID List: Not Supported 00:20:51.091 Multi-Domain Subsystem: Not Supported 00:20:51.091 Fixed Capacity Management: Not Supported 00:20:51.091 Variable Capacity Management: Not Supported 00:20:51.091 Delete Endurance Group: Not Supported 00:20:51.091 Delete NVM Set: Not Supported 00:20:51.091 Extended LBA Formats Supported: Not Supported 00:20:51.091 Flexible Data Placement Supported: Not Supported 00:20:51.091 00:20:51.091 Controller Memory Buffer Support 00:20:51.091 ================================ 00:20:51.091 Supported: No 00:20:51.091 00:20:51.091 Persistent Memory Region Support 00:20:51.091 ================================ 00:20:51.091 Supported: No 00:20:51.091 00:20:51.091 Admin Command Set Attributes 00:20:51.091 ============================ 00:20:51.091 Security Send/Receive: Not Supported 00:20:51.091 Format NVM: Not Supported 00:20:51.091 Firmware Activate/Download: Not Supported 00:20:51.091 Namespace Management: Not Supported 00:20:51.091 Device Self-Test: Not Supported 00:20:51.091 
Directives: Not Supported 00:20:51.091 NVMe-MI: Not Supported 00:20:51.091 Virtualization Management: Not Supported 00:20:51.091 Doorbell Buffer Config: Not Supported 00:20:51.091 Get LBA Status Capability: Not Supported 00:20:51.091 Command & Feature Lockdown Capability: Not Supported 00:20:51.091 Abort Command Limit: 1 00:20:51.091 Async Event Request Limit: 4 00:20:51.091 Number of Firmware Slots: N/A 00:20:51.091 Firmware Slot 1 Read-Only: N/A 00:20:51.091 Firmware Activation Without Reset: N/A 00:20:51.091 Multiple Update Detection Support: N/A 00:20:51.091 Firmware Update Granularity: No Information Provided 00:20:51.091 Per-Namespace SMART Log: No 00:20:51.091 Asymmetric Namespace Access Log Page: Not Supported 00:20:51.091 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:51.091 Command Effects Log Page: Not Supported 00:20:51.091 Get Log Page Extended Data: Supported 00:20:51.091 Telemetry Log Pages: Not Supported 00:20:51.091 Persistent Event Log Pages: Not Supported 00:20:51.091 Supported Log Pages Log Page: May Support 00:20:51.091 Commands Supported & Effects Log Page: Not Supported 00:20:51.091 Feature Identifiers & Effects Log Page:May Support 00:20:51.091 NVMe-MI Commands & Effects Log Page: May Support 00:20:51.091 Data Area 4 for Telemetry Log: Not Supported 00:20:51.091 Error Log Page Entries Supported: 128 00:20:51.091 Keep Alive: Not Supported 00:20:51.091 00:20:51.091 NVM Command Set Attributes 00:20:51.091 ========================== 00:20:51.091 Submission Queue Entry Size 00:20:51.091 Max: 1 00:20:51.091 Min: 1 00:20:51.091 Completion Queue Entry Size 00:20:51.091 Max: 1 00:20:51.091 Min: 1 00:20:51.091 Number of Namespaces: 0 00:20:51.091 Compare Command: Not Supported 00:20:51.091 Write Uncorrectable Command: Not Supported 00:20:51.091 Dataset Management Command: Not Supported 00:20:51.091 Write Zeroes Command: Not Supported 00:20:51.091 Set Features Save Field: Not Supported 00:20:51.091 Reservations: Not Supported 00:20:51.091 Timestamp: Not Supported 00:20:51.091 Copy: Not Supported 00:20:51.091 Volatile Write Cache: Not Present 00:20:51.091 Atomic Write Unit (Normal): 1 00:20:51.091 Atomic Write Unit (PFail): 1 00:20:51.091 Atomic Compare & Write Unit: 1 00:20:51.091 Fused Compare & Write: Supported 00:20:51.091 Scatter-Gather List 00:20:51.091 SGL Command Set: Supported 00:20:51.091 SGL Keyed: Supported 00:20:51.091 SGL Bit Bucket Descriptor: Not Supported 00:20:51.091 SGL Metadata Pointer: Not Supported 00:20:51.091 Oversized SGL: Not Supported 00:20:51.091 SGL Metadata Address: Not Supported 00:20:51.091 SGL Offset: Supported 00:20:51.091 Transport SGL Data Block: Not Supported 00:20:51.091 Replay Protected Memory Block: Not Supported 00:20:51.091 00:20:51.091 Firmware Slot Information 00:20:51.091 ========================= 00:20:51.091 Active slot: 0 00:20:51.091 00:20:51.091 00:20:51.091 Error Log 00:20:51.091 ========= 00:20:51.091 00:20:51.091 Active Namespaces 00:20:51.091 ================= 00:20:51.091 Discovery Log Page 00:20:51.091 ================== 00:20:51.091 Generation Counter: 2 00:20:51.091 Number of Records: 2 00:20:51.091 Record Format: 0 00:20:51.091 00:20:51.091 Discovery Log Entry 0 00:20:51.091 ---------------------- 00:20:51.091 Transport Type: 1 (RDMA) 00:20:51.092 Address Family: 1 (IPv4) 00:20:51.092 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:51.092 Entry Flags: 00:20:51.092 Duplicate Returned Information: 1 00:20:51.092 Explicit Persistent Connection Support for Discovery: 1 00:20:51.092 Transport Requirements: 
00:20:51.092 Secure Channel: Not Required 00:20:51.092 Port ID: 0 (0x0000) 00:20:51.092 Controller ID: 65535 (0xffff) 00:20:51.092 Admin Max SQ Size: 128 00:20:51.092 Transport Service Identifier: 4420 00:20:51.092 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:51.092 Transport Address: 192.168.100.8 00:20:51.092 Transport Specific Address Subtype - RDMA 00:20:51.092 RDMA QP Service Type: 1 (Reliable Connected) 00:20:51.092 RDMA Provider Type: 1 (No provider specified) 00:20:51.092 RDMA CM Service: 1 (RDMA_CM) 00:20:51.092 Discovery Log Entry 1 00:20:51.092 ---------------------- 00:20:51.092 Transport Type: 1 (RDMA) 00:20:51.092 Address Family: 1 (IPv4) 00:20:51.092 Subsystem Type: 2 (NVM Subsystem) 00:20:51.092 Entry Flags: 00:20:51.092 Duplicate Returned Information: 0 00:20:51.092 Explicit Persistent Connection Support for Discovery: 0 00:20:51.092 Transport Requirements: 00:20:51.092 Secure Channel: Not Required 00:20:51.092 Port ID: 0 (0x0000) 00:20:51.092 Controller ID: 65535 (0xffff) 00:20:51.092 Admin Max SQ Size: [2024-07-15 14:55:24.849504] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:51.092 [2024-07-15 14:55:24.849512] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58016 doesn't match qid 00:20:51.092 [2024-07-15 14:55:24.849524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32536 cdw0:5 sqhd:dad0 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849529] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58016 doesn't match qid 00:20:51.092 [2024-07-15 14:55:24.849535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32536 cdw0:5 sqhd:dad0 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849544] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58016 doesn't match qid 00:20:51.092 [2024-07-15 14:55:24.849551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32536 cdw0:5 sqhd:dad0 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849555] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58016 doesn't match qid 00:20:51.092 [2024-07-15 14:55:24.849561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32536 cdw0:5 sqhd:dad0 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849568] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849593] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849604] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849615] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 
14:55:24.849636] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849645] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:51.092 [2024-07-15 14:55:24.849649] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:51.092 [2024-07-15 14:55:24.849653] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849660] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849681] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849689] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849696] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849720] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849728] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849734] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849760] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849769] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849776] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849804] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849814] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849820] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849846] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849856] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849864] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849887] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:51.092 [2024-07-15 14:55:24.849896] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849903] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.092 [2024-07-15 14:55:24.849908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.092 [2024-07-15 14:55:24.849932] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.092 [2024-07-15 14:55:24.849937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.849942] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.849949] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.849955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.849973] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.849978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.849982] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.849989] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.849995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850012] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850020] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850027] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850051] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850060] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850066] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850087] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850096] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850103] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850131] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850139] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850146] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850173] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850181] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850187] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850213] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850221] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850228] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850253] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850261] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850268] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850292] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850300] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850307] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850332] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850341] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850347] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850373] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850381] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850388] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850413] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:51.093 [2024-07-15 14:55:24.850421] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850428] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.093 [2024-07-15 14:55:24.850433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.093 [2024-07-15 14:55:24.850449] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.093 [2024-07-15 14:55:24.850453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850457] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850463] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850487] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850495] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850502] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850532] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850544] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850551] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850579] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850587] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850594] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850618] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850626] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850633] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850672] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850680] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850686] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850711] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850719] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850725] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850746] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850755] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850761] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850788] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850796] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850803] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850830] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850838] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850845] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850870] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850878] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850885] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850909] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.094 [2024-07-15 14:55:24.850913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:51.094 [2024-07-15 14:55:24.850917] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850924] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.094 [2024-07-15 14:55:24.850929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.094 [2024-07-15 14:55:24.850952] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.850956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.850960] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.850967] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.850972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.850992] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.850996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851000] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851007] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851029] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851037] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851044] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851073] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851081] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851088] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851115] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851123] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851130] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851159] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851168] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851174] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851200] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851208] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851214] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851238] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851247] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851253] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851283] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851291] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851299] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851325] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851333] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851339] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851369] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851377] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851384] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851406] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851415] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851421] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851427] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851445] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851453] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851460] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851484] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851492] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851498] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.095 [2024-07-15 14:55:24.851527] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.095 [2024-07-15 14:55:24.851531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:51.095 [2024-07-15 14:55:24.851535] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.095 [2024-07-15 14:55:24.851554] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851584] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851593] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851599] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851626] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851634] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851641] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851668] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851676] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851682] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851711] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851719] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851726] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851750] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851759] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851765] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851792] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851802] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851809] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851829] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851837] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851844] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851865] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851873] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851880] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851905] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851914] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851920] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851945] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851954] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851960] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.851984] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.851988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.851992] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.851999] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.852005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.852026] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.852030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.852036] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.852042] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.852048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.852063] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.852067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.852071] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.852078] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.852084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.852102] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.852106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.852110] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.852116] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.096 [2024-07-15 14:55:24.852122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.096 [2024-07-15 14:55:24.852140] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.096 [2024-07-15 14:55:24.852144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:51.096 [2024-07-15 14:55:24.852148] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852155] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852185] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852193] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852200] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852223] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852232] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852238] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852259] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852269] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852275] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852301] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852309] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852315] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852341] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852349] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852356] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852379] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852388] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852394] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852419] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852428] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852434] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852461] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852470] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852476] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.852501] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.852508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.852512] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852519] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.852524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.856546] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.856553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.856557] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.856564] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.856570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.097 [2024-07-15 14:55:24.856591] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.097 [2024-07-15 14:55:24.856595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0019 p:0 m:0 dnr:0 00:20:51.097 [2024-07-15 14:55:24.856599] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181300 00:20:51.097 [2024-07-15 14:55:24.856604] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:51.097 128 00:20:51.097 Transport Service Identifier: 4420 00:20:51.097 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:51.097 Transport Address: 192.168.100.8 00:20:51.097 Transport Specific Address Subtype - RDMA 00:20:51.097 RDMA QP Service Type: 1 (Reliable Connected) 00:20:51.097 RDMA Provider Type: 1 (No provider specified) 00:20:51.097 RDMA CM Service: 1 (RDMA_CM) 00:20:51.097 14:55:24 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:51.097 [2024-07-15 14:55:24.927136] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:20:51.097 [2024-07-15 14:55:24.927169] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904180 ] 00:20:51.379 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.379 [2024-07-15 14:55:24.966928] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:51.379 [2024-07-15 14:55:24.966995] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:51.379 [2024-07-15 14:55:24.967009] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:51.379 [2024-07-15 14:55:24.967012] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:51.379 [2024-07-15 14:55:24.967034] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:51.379 [2024-07-15 14:55:24.977929] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:20:51.379 [2024-07-15 14:55:24.992217] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:51.380 [2024-07-15 14:55:24.992228] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:51.380 [2024-07-15 14:55:24.992234] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992239] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992243] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992248] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992252] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992256] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992261] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992265] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992269] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992273] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992278] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992282] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992286] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992290] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992295] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992299] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992303] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992307] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992312] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992316] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992320] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992324] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992329] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992333] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992337] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992341] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992346] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992350] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992354] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992358] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992365] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992369] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:51.380 [2024-07-15 14:55:24.992373] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:51.380 [2024-07-15 14:55:24.992376] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:51.380 [2024-07-15 14:55:24.992389] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.992399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x181300 00:20:51.380 [2024-07-15 14:55:24.997544] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.997552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.997558] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997563] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:51.380 [2024-07-15 14:55:24.997568] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:51.380 [2024-07-15 14:55:24.997573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:51.380 [2024-07-15 14:55:24.997584] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.380 [2024-07-15 14:55:24.997607] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.997612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.997616] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:51.380 [2024-07-15 14:55:24.997620] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997625] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:51.380 [2024-07-15 14:55:24.997631] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.380 [2024-07-15 14:55:24.997654] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.997658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.997663] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:51.380 [2024-07-15 14:55:24.997667] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:51.380 [2024-07-15 14:55:24.997678] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.380 [2024-07-15 14:55:24.997698] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.997702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.997709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:51.380 [2024-07-15 14:55:24.997713] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 
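
The FABRIC PROPERTY GET/SET exchanges traced here are the host reading VS and CAP and then toggling CC.EN while it polls CSTS, i.e. the usual controller enable sequence carried over the RDMA admin queue. As an illustration only (not part of this test), the same registers can be read back from an application once a controller handle exists; a small self-contained sketch assuming the public spdk_nvme_ctrlr_get_regs_*() helpers:

#include <stdio.h>

#include "spdk/nvme.h"

/*
 * Illustration only: read back the registers whose PROPERTY GET round-trips
 * appear in the trace above, given a controller handle obtained as in the
 * earlier connect sketch.
 */
static void
print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	/* VS 1.3 matches the "NVMe Specification Version (VS)" line printed later. */
	printf("VS: %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
	/* CAP.MQES is zero-based; CAP.TO is the enable timeout in 500 ms units. */
	printf("CAP.MQES: %u  CAP.TO: %u\n", (unsigned)cap.bits.mqes, (unsigned)cap.bits.to);
	/* CSTS.RDY flips to 1 once CC.EN=1 takes effect, which is what the
	 * "wait for CSTS.RDY = 1" state in the trace is polling for. */
	printf("CSTS.RDY: %u\n", (unsigned)csts.bits.rdy);
}
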
00:20:51.380 [2024-07-15 14:55:24.997720] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.380 [2024-07-15 14:55:24.997746] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.997751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.997755] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:51.380 [2024-07-15 14:55:24.997759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:51.380 [2024-07-15 14:55:24.997763] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:51.380 [2024-07-15 14:55:24.997872] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:51.380 [2024-07-15 14:55:24.997876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:51.380 [2024-07-15 14:55:24.997883] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.380 [2024-07-15 14:55:24.997914] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.997918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.997922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:51.380 [2024-07-15 14:55:24.997926] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997932] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.380 [2024-07-15 14:55:24.997955] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.997960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.997964] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:51.380 [2024-07-15 14:55:24.997967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:20:51.380 [2024-07-15 14:55:24.997972] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.997977] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:51.380 [2024-07-15 14:55:24.997988] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:51.380 [2024-07-15 14:55:24.997998] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.380 [2024-07-15 14:55:24.998004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181300 00:20:51.380 [2024-07-15 14:55:24.998043] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.380 [2024-07-15 14:55:24.998047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:51.380 [2024-07-15 14:55:24.998054] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:51.380 [2024-07-15 14:55:24.998058] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:51.380 [2024-07-15 14:55:24.998062] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:51.380 [2024-07-15 14:55:24.998066] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:51.381 [2024-07-15 14:55:24.998070] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:51.381 [2024-07-15 14:55:24.998074] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998077] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998089] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.381 [2024-07-15 14:55:24.998111] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998122] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.381 [2024-07-15 14:55:24.998132] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x181300 00:20:51.381 
[2024-07-15 14:55:24.998137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.381 [2024-07-15 14:55:24.998142] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.381 [2024-07-15 14:55:24.998152] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.381 [2024-07-15 14:55:24.998161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998165] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998179] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.381 [2024-07-15 14:55:24.998201] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998210] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:51.381 [2024-07-15 14:55:24.998216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998220] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998231] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998236] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.381 [2024-07-15 14:55:24.998267] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998319] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998323] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998336] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181300 00:20:51.381 [2024-07-15 14:55:24.998367] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998381] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:51.381 [2024-07-15 14:55:24.998390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998395] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998406] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181300 00:20:51.381 [2024-07-15 14:55:24.998447] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998467] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998479] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181300 00:20:51.381 [2024-07-15 14:55:24.998507] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998521] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998551] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998555] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:51.381 [2024-07-15 14:55:24.998559] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:51.381 [2024-07-15 14:55:24.998563] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:51.381 [2024-07-15 14:55:24.998575] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.381 [2024-07-15 14:55:24.998586] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.381 [2024-07-15 14:55:24.998600] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998609] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998615] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.381 [2024-07-15 14:55:24.998628] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998637] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998645] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998654] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998660] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.381 [2024-07-15 14:55:24.998685] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998694] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998700] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.381 [2024-07-15 14:55:24.998725] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.381 [2024-07-15 14:55:24.998729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:20:51.381 [2024-07-15 14:55:24.998733] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998744] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x181300 00:20:51.381 [2024-07-15 14:55:24.998750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x181300 00:20:51.382 [2024-07-15 14:55:24.998757] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x181300 00:20:51.382 [2024-07-15 14:55:24.998762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x181300 00:20:51.382 [2024-07-15 14:55:24.998769] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x181300 00:20:51.382 [2024-07-15 14:55:24.998774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x181300 00:20:51.382 [2024-07-15 14:55:24.998783] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181300 00:20:51.382 [2024-07-15 14:55:24.998788] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x181300 00:20:51.382 [2024-07-15 14:55:24.998794] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.382 [2024-07-15 14:55:24.998799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:51.382 [2024-07-15 14:55:24.998809] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.382 [2024-07-15 14:55:24.998815] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.382 [2024-07-15 14:55:24.998819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:51.382 [2024-07-15 14:55:24.998826] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.382 [2024-07-15 14:55:24.998831] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.382 [2024-07-15 14:55:24.998834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:51.382 [2024-07-15 14:55:24.998839] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.382 [2024-07-15 14:55:24.998855] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.382 [2024-07-15 14:55:24.998860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:51.382 [2024-07-15 14:55:24.998866] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.382 ===================================================== 00:20:51.382 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.382 ===================================================== 00:20:51.382 Controller Capabilities/Features 00:20:51.382 ================================ 00:20:51.382 Vendor ID: 8086 00:20:51.382 Subsystem Vendor ID: 8086 00:20:51.382 Serial Number: SPDK00000000000001 00:20:51.382 Model Number: SPDK bdev Controller 00:20:51.382 Firmware Version: 24.09 00:20:51.382 Recommended Arb Burst: 6 00:20:51.382 IEEE OUI Identifier: e4 d2 5c 00:20:51.382 Multi-path I/O 00:20:51.382 May have multiple subsystem ports: Yes 00:20:51.382 May have multiple controllers: Yes 00:20:51.382 Associated with SR-IOV VF: No 00:20:51.382 Max Data Transfer Size: 131072 00:20:51.382 Max Number of Namespaces: 32 00:20:51.382 Max Number of I/O Queues: 127 00:20:51.382 NVMe Specification Version (VS): 1.3 00:20:51.382 NVMe Specification Version (Identify): 1.3 00:20:51.382 Maximum Queue Entries: 128 00:20:51.382 Contiguous Queues Required: Yes 00:20:51.382 Arbitration Mechanisms Supported 00:20:51.382 Weighted Round Robin: Not Supported 00:20:51.382 Vendor Specific: Not Supported 00:20:51.382 Reset Timeout: 15000 ms 00:20:51.382 Doorbell Stride: 4 bytes 00:20:51.382 NVM Subsystem Reset: Not Supported 00:20:51.382 Command Sets Supported 00:20:51.382 NVM Command Set: Supported 00:20:51.382 Boot Partition: Not Supported 00:20:51.382 Memory Page Size Minimum: 4096 bytes 00:20:51.382 Memory Page Size Maximum: 4096 bytes 00:20:51.382 Persistent Memory Region: Not Supported 00:20:51.382 Optional Asynchronous Events 
Supported 00:20:51.382 Namespace Attribute Notices: Supported 00:20:51.382 Firmware Activation Notices: Not Supported 00:20:51.382 ANA Change Notices: Not Supported 00:20:51.382 PLE Aggregate Log Change Notices: Not Supported 00:20:51.382 LBA Status Info Alert Notices: Not Supported 00:20:51.382 EGE Aggregate Log Change Notices: Not Supported 00:20:51.382 Normal NVM Subsystem Shutdown event: Not Supported 00:20:51.382 Zone Descriptor Change Notices: Not Supported 00:20:51.382 Discovery Log Change Notices: Not Supported 00:20:51.382 Controller Attributes 00:20:51.382 128-bit Host Identifier: Supported 00:20:51.382 Non-Operational Permissive Mode: Not Supported 00:20:51.382 NVM Sets: Not Supported 00:20:51.382 Read Recovery Levels: Not Supported 00:20:51.382 Endurance Groups: Not Supported 00:20:51.382 Predictable Latency Mode: Not Supported 00:20:51.382 Traffic Based Keep ALive: Not Supported 00:20:51.382 Namespace Granularity: Not Supported 00:20:51.382 SQ Associations: Not Supported 00:20:51.382 UUID List: Not Supported 00:20:51.382 Multi-Domain Subsystem: Not Supported 00:20:51.382 Fixed Capacity Management: Not Supported 00:20:51.382 Variable Capacity Management: Not Supported 00:20:51.382 Delete Endurance Group: Not Supported 00:20:51.382 Delete NVM Set: Not Supported 00:20:51.382 Extended LBA Formats Supported: Not Supported 00:20:51.382 Flexible Data Placement Supported: Not Supported 00:20:51.382 00:20:51.382 Controller Memory Buffer Support 00:20:51.382 ================================ 00:20:51.382 Supported: No 00:20:51.382 00:20:51.382 Persistent Memory Region Support 00:20:51.382 ================================ 00:20:51.382 Supported: No 00:20:51.382 00:20:51.382 Admin Command Set Attributes 00:20:51.382 ============================ 00:20:51.382 Security Send/Receive: Not Supported 00:20:51.382 Format NVM: Not Supported 00:20:51.382 Firmware Activate/Download: Not Supported 00:20:51.382 Namespace Management: Not Supported 00:20:51.382 Device Self-Test: Not Supported 00:20:51.382 Directives: Not Supported 00:20:51.382 NVMe-MI: Not Supported 00:20:51.382 Virtualization Management: Not Supported 00:20:51.382 Doorbell Buffer Config: Not Supported 00:20:51.382 Get LBA Status Capability: Not Supported 00:20:51.382 Command & Feature Lockdown Capability: Not Supported 00:20:51.382 Abort Command Limit: 4 00:20:51.382 Async Event Request Limit: 4 00:20:51.382 Number of Firmware Slots: N/A 00:20:51.382 Firmware Slot 1 Read-Only: N/A 00:20:51.382 Firmware Activation Without Reset: N/A 00:20:51.382 Multiple Update Detection Support: N/A 00:20:51.382 Firmware Update Granularity: No Information Provided 00:20:51.382 Per-Namespace SMART Log: No 00:20:51.382 Asymmetric Namespace Access Log Page: Not Supported 00:20:51.382 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:51.382 Command Effects Log Page: Supported 00:20:51.382 Get Log Page Extended Data: Supported 00:20:51.382 Telemetry Log Pages: Not Supported 00:20:51.382 Persistent Event Log Pages: Not Supported 00:20:51.382 Supported Log Pages Log Page: May Support 00:20:51.382 Commands Supported & Effects Log Page: Not Supported 00:20:51.382 Feature Identifiers & Effects Log Page:May Support 00:20:51.382 NVMe-MI Commands & Effects Log Page: May Support 00:20:51.382 Data Area 4 for Telemetry Log: Not Supported 00:20:51.382 Error Log Page Entries Supported: 128 00:20:51.382 Keep Alive: Supported 00:20:51.382 Keep Alive Granularity: 10000 ms 00:20:51.382 00:20:51.382 NVM Command Set Attributes 00:20:51.382 ========================== 00:20:51.382 
Submission Queue Entry Size 00:20:51.382 Max: 64 00:20:51.382 Min: 64 00:20:51.382 Completion Queue Entry Size 00:20:51.382 Max: 16 00:20:51.382 Min: 16 00:20:51.382 Number of Namespaces: 32 00:20:51.382 Compare Command: Supported 00:20:51.382 Write Uncorrectable Command: Not Supported 00:20:51.382 Dataset Management Command: Supported 00:20:51.382 Write Zeroes Command: Supported 00:20:51.382 Set Features Save Field: Not Supported 00:20:51.382 Reservations: Supported 00:20:51.382 Timestamp: Not Supported 00:20:51.382 Copy: Supported 00:20:51.382 Volatile Write Cache: Present 00:20:51.382 Atomic Write Unit (Normal): 1 00:20:51.382 Atomic Write Unit (PFail): 1 00:20:51.382 Atomic Compare & Write Unit: 1 00:20:51.382 Fused Compare & Write: Supported 00:20:51.382 Scatter-Gather List 00:20:51.382 SGL Command Set: Supported 00:20:51.382 SGL Keyed: Supported 00:20:51.382 SGL Bit Bucket Descriptor: Not Supported 00:20:51.382 SGL Metadata Pointer: Not Supported 00:20:51.382 Oversized SGL: Not Supported 00:20:51.382 SGL Metadata Address: Not Supported 00:20:51.382 SGL Offset: Supported 00:20:51.382 Transport SGL Data Block: Not Supported 00:20:51.382 Replay Protected Memory Block: Not Supported 00:20:51.382 00:20:51.382 Firmware Slot Information 00:20:51.382 ========================= 00:20:51.382 Active slot: 1 00:20:51.382 Slot 1 Firmware Revision: 24.09 00:20:51.382 00:20:51.382 00:20:51.382 Commands Supported and Effects 00:20:51.382 ============================== 00:20:51.382 Admin Commands 00:20:51.382 -------------- 00:20:51.382 Get Log Page (02h): Supported 00:20:51.382 Identify (06h): Supported 00:20:51.382 Abort (08h): Supported 00:20:51.382 Set Features (09h): Supported 00:20:51.382 Get Features (0Ah): Supported 00:20:51.382 Asynchronous Event Request (0Ch): Supported 00:20:51.382 Keep Alive (18h): Supported 00:20:51.382 I/O Commands 00:20:51.382 ------------ 00:20:51.382 Flush (00h): Supported LBA-Change 00:20:51.382 Write (01h): Supported LBA-Change 00:20:51.382 Read (02h): Supported 00:20:51.382 Compare (05h): Supported 00:20:51.382 Write Zeroes (08h): Supported LBA-Change 00:20:51.382 Dataset Management (09h): Supported LBA-Change 00:20:51.382 Copy (19h): Supported LBA-Change 00:20:51.382 00:20:51.382 Error Log 00:20:51.382 ========= 00:20:51.382 00:20:51.382 Arbitration 00:20:51.382 =========== 00:20:51.382 Arbitration Burst: 1 00:20:51.382 00:20:51.382 Power Management 00:20:51.383 ================ 00:20:51.383 Number of Power States: 1 00:20:51.383 Current Power State: Power State #0 00:20:51.383 Power State #0: 00:20:51.383 Max Power: 0.00 W 00:20:51.383 Non-Operational State: Operational 00:20:51.383 Entry Latency: Not Reported 00:20:51.383 Exit Latency: Not Reported 00:20:51.383 Relative Read Throughput: 0 00:20:51.383 Relative Read Latency: 0 00:20:51.383 Relative Write Throughput: 0 00:20:51.383 Relative Write Latency: 0 00:20:51.383 Idle Power: Not Reported 00:20:51.383 Active Power: Not Reported 00:20:51.383 Non-Operational Permissive Mode: Not Supported 00:20:51.383 00:20:51.383 Health Information 00:20:51.383 ================== 00:20:51.383 Critical Warnings: 00:20:51.383 Available Spare Space: OK 00:20:51.383 Temperature: OK 00:20:51.383 Device Reliability: OK 00:20:51.383 Read Only: No 00:20:51.383 Volatile Memory Backup: OK 00:20:51.383 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:51.383 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:51.383 Available Spare: 0% 00:20:51.383 Available Spare Threshold: 0% 00:20:51.383 Life Percentage [2024-07-15 14:55:24.998940] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.998947] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.998969] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.998974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.998978] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999000] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:51.383 [2024-07-15 14:55:24.999007] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 22788 doesn't match qid 00:20:51.383 [2024-07-15 14:55:24.999019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999024] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 22788 doesn't match qid 00:20:51.383 [2024-07-15 14:55:24.999030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999035] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 22788 doesn't match qid 00:20:51.383 [2024-07-15 14:55:24.999040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999045] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 22788 doesn't match qid 00:20:51.383 [2024-07-15 14:55:24.999050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32586 cdw0:5 sqhd:1ad0 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999057] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999079] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999090] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999100] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999118] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999127] 
nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:51.383 [2024-07-15 14:55:24.999131] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:51.383 [2024-07-15 14:55:24.999135] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999141] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999167] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999177] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999185] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999208] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999217] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999224] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999245] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999256] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999263] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999283] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999292] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999298] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999325] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999335] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999345] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999375] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999385] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999392] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999414] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999423] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999430] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999451] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999460] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999467] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999494] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:51.383 
[2024-07-15 14:55:24.999503] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999510] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999535] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999549] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999555] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999581] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:51.383 [2024-07-15 14:55:24.999591] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999598] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.383 [2024-07-15 14:55:24.999604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.383 [2024-07-15 14:55:24.999622] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.383 [2024-07-15 14:55:24.999626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999630] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999637] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999662] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999671] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999678] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999704] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999713] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999720] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999745] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999754] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999761] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999790] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999799] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999806] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999835] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999844] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999851] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999874] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999882] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999889] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999915] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999923] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999930] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:24.999960] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:24.999964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:24.999968] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999975] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:24.999981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000000] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:25.000009] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000016] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000044] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:25.000053] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000059] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000083] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 
14:55:25.000093] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000100] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000125] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:25.000133] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000140] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000164] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:25.000172] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000179] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000202] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:25.000210] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000217] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000239] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:51.384 [2024-07-15 14:55:25.000248] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000254] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.384 [2024-07-15 14:55:25.000260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.384 [2024-07-15 14:55:25.000278] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.384 [2024-07-15 14:55:25.000282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000287] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000293] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000314] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000323] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000330] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000357] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000365] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000372] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000393] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000401] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000408] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000432] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000440] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000447] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000471] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000479] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000486] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000513] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000521] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000528] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000552] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000561] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000568] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000593] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000602] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000609] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000632] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 
14:55:25.000641] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000648] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000673] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000681] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000688] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000709] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000717] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000724] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000748] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000756] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000763] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000793] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000808] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000832] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000840] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000847] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000868] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000876] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000883] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000905] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000914] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000920] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000947] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000956] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000963] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.000968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.000989] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.000993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.000998] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.001004] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.001010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.385 [2024-07-15 14:55:25.001026] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.385 [2024-07-15 14:55:25.001031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:51.385 [2024-07-15 14:55:25.001035] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x181300 00:20:51.385 [2024-07-15 14:55:25.001042] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001067] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001075] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001082] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001111] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001119] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001126] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001146] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001155] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001162] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001187] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 
14:55:25.001195] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001202] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001230] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001239] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001245] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001269] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001278] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001284] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001310] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001318] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001325] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001352] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001360] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001367] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001395] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001404] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001410] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001431] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001439] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001446] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001470] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001478] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001485] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.001513] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.001517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.001522] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001528] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.001534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.005546] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.005553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.005557] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.005564] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.005570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:51.386 [2024-07-15 14:55:25.005585] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:51.386 [2024-07-15 14:55:25.005590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0017 p:0 m:0 dnr:0 00:20:51.386 [2024-07-15 14:55:25.005594] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x181300 00:20:51.386 [2024-07-15 14:55:25.005599] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:20:51.386 Used: 0% 00:20:51.386 Data Units Read: 0 00:20:51.386 Data Units Written: 0 00:20:51.386 Host Read Commands: 0 00:20:51.386 Host Write Commands: 0 00:20:51.386 Controller Busy Time: 0 minutes 00:20:51.386 Power Cycles: 0 00:20:51.386 Power On Hours: 0 hours 00:20:51.386 Unsafe Shutdowns: 0 00:20:51.386 Unrecoverable Media Errors: 0 00:20:51.386 Lifetime Error Log Entries: 0 00:20:51.386 Warning Temperature Time: 0 minutes 00:20:51.386 Critical Temperature Time: 0 minutes 00:20:51.386 00:20:51.386 Number of Queues 00:20:51.386 ================ 00:20:51.386 Number of I/O Submission Queues: 127 00:20:51.386 Number of I/O Completion Queues: 127 00:20:51.386 00:20:51.386 Active Namespaces 00:20:51.386 ================= 00:20:51.386 Namespace ID:1 00:20:51.386 Error Recovery Timeout: Unlimited 00:20:51.386 Command Set Identifier: NVM (00h) 00:20:51.386 Deallocate: Supported 00:20:51.386 Deallocated/Unwritten Error: Not Supported 00:20:51.386 Deallocated Read Value: Unknown 00:20:51.386 Deallocate in Write Zeroes: Not Supported 00:20:51.386 Deallocated Guard Field: 0xFFFF 00:20:51.386 Flush: Supported 00:20:51.386 Reservation: Supported 00:20:51.386 Namespace Sharing Capabilities: Multiple Controllers 00:20:51.386 Size (in LBAs): 131072 (0GiB) 00:20:51.386 Capacity (in LBAs): 131072 (0GiB) 00:20:51.386 Utilization (in LBAs): 131072 (0GiB) 00:20:51.386 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:51.386 EUI64: ABCDEF0123456789 00:20:51.386 UUID: a16908ae-2d0a-489f-8d24-6a3aa94b84db 00:20:51.386 Thin Provisioning: Not Supported 00:20:51.386 Per-NS Atomic Units: Yes 00:20:51.386 Atomic Boundary Size (Normal): 0 00:20:51.386 Atomic Boundary Size (PFail): 0 00:20:51.386 Atomic Boundary Offset: 0 00:20:51.386 Maximum Single Source Range Length: 65535 00:20:51.386 Maximum Copy Length: 65535 00:20:51.386 Maximum Source Range Count: 1 00:20:51.386 NGUID/EUI64 Never Reused: No 00:20:51.386 Namespace Write Protected: No 00:20:51.386 Number of LBA Formats: 1 00:20:51.386 Current LBA Format: LBA Format #00 00:20:51.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:51.386 00:20:51.386 14:55:25 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:51.386 14:55:25 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.386 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.386 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.386 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.386 14:55:25 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:51.386 14:55:25 nvmf_rdma.nvmf_identify -- 
host/identify.sh@56 -- # nvmftestfini 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:51.387 rmmod nvme_rdma 00:20:51.387 rmmod nvme_fabrics 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2904021 ']' 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2904021 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2904021 ']' 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2904021 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2904021 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2904021' 00:20:51.387 killing process with pid 2904021 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2904021 00:20:51.387 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2904021 00:20:51.644 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.645 14:55:25 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:51.645 00:20:51.645 real 0m7.313s 00:20:51.645 user 0m7.822s 00:20:51.645 sys 0m4.511s 00:20:51.645 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.645 14:55:25 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 ************************************ 00:20:51.645 END TEST nvmf_identify 00:20:51.645 ************************************ 00:20:51.645 14:55:25 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:20:51.645 14:55:25 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:51.645 14:55:25 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:51.645 14:55:25 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.645 14:55:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:51.645 ************************************ 00:20:51.645 START TEST nvmf_perf 00:20:51.645 ************************************ 00:20:51.645 14:55:25 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:51.645 * Looking for test storage... 00:20:51.645 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:51.645 14:55:25 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.645 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.902 14:55:25 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.165 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == 
mlx5 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:57.166 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:57.166 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:57.166 Found net devices under 0000:da:00.0: mlx_0_0 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:57.166 Found net devices under 0000:da:00.1: mlx_0_1 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:57.166 14:55:30 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.166 14:55:30 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:57.166 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:57.166 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:57.166 altname enp218s0f0np0 00:20:57.166 altname ens818f0np0 00:20:57.166 inet 192.168.100.8/24 scope global mlx_0_0 00:20:57.166 valid_lft forever preferred_lft forever 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:57.166 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:57.166 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:57.166 altname enp218s0f1np1 00:20:57.166 altname ens818f1np1 00:20:57.166 inet 192.168.100.9/24 scope global mlx_0_1 00:20:57.166 valid_lft forever preferred_lft forever 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:57.166 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # 
continue 2 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:57.167 192.168.100.9' 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:57.167 192.168.100.9' 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:57.167 192.168.100.9' 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:20:57.167 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:57.424 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2907337 00:20:57.425 14:55:31 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2907337 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2907337 ']' 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:57.425 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:57.425 [2024-07-15 14:55:31.161367] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:20:57.425 [2024-07-15 14:55:31.161412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.425 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.425 [2024-07-15 14:55:31.220618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.425 [2024-07-15 14:55:31.300278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.425 [2024-07-15 14:55:31.300317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.425 [2024-07-15 14:55:31.300324] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.425 [2024-07-15 14:55:31.300329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.425 [2024-07-15 14:55:31.300334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
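(For orientation, the nvmfappstart step traced above amounts to launching the target binary and blocking until its RPC socket is up. A minimal shell sketch follows; the binary path, flags and socket path are taken verbatim from the trace, while the polling loop is only an assumption standing in for the waitforlisten helper in common/autotest_common.sh.)

# sketch only -- not part of the captured log
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# assumed stand-in for waitforlisten: block until the RPC UNIX-domain socket exists
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# tracepoint group mask 0xFFFF is enabled, so a runtime snapshot can be captured with:
#   spdk_trace -s nvmf -i 0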
00:20:57.425 [2024-07-15 14:55:31.300375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.425 [2024-07-15 14:55:31.300390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.425 [2024-07-15 14:55:31.300475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.425 [2024-07-15 14:55:31.300476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.365 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.365 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:58.365 14:55:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:58.365 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:58.365 14:55:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:58.365 14:55:32 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.365 14:55:32 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:58.365 14:55:32 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:21:01.636 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:21:01.636 [2024-07-15 14:55:35.549890] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:21:01.907 [2024-07-15 14:55:35.569601] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11021b0/0x110fd00) succeed. 00:21:01.907 [2024-07-15 14:55:35.578825] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11037f0/0x118fd40) succeed. 
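(Condensing the setup traced above and continued immediately below: host/perf.sh builds the whole target configuration through rpc.py. The sequence is sketched here with paths shortened for readability; every call and argument is taken from the trace itself, not additional log output.)

# sketch of the perf.sh target setup -- not part of the captured log
scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config            # attaches the local NVMe at 0000:5f:00.0 as Nvme0n1
scripts/rpc.py bdev_malloc_create 64 512                              # adds the Malloc0 ramdisk bdev
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0   # target raises in-capsule size to the 256-byte minimum (warning above)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420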
00:21:01.907 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.165 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:02.165 14:55:35 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:02.165 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:02.165 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:02.422 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:02.679 [2024-07-15 14:55:36.427208] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:02.679 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:02.937 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:21:02.937 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:21:02.937 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:02.937 14:55:36 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:21:04.310 Initializing NVMe Controllers 00:21:04.310 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:21:04.310 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:21:04.310 Initialization complete. Launching workers. 00:21:04.310 ======================================================== 00:21:04.310 Latency(us) 00:21:04.310 Device Information : IOPS MiB/s Average min max 00:21:04.310 PCIE (0000:5f:00.0) NSID 1 from core 0: 99533.81 388.80 321.07 38.77 5295.50 00:21:04.310 ======================================================== 00:21:04.310 Total : 99533.81 388.80 321.07 38.77 5295.50 00:21:04.310 00:21:04.310 14:55:37 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:04.310 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.586 Initializing NVMe Controllers 00:21:07.586 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.586 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:07.586 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:07.586 Initialization complete. Launching workers. 
00:21:07.586 ======================================================== 00:21:07.586 Latency(us) 00:21:07.586 Device Information : IOPS MiB/s Average min max 00:21:07.586 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6746.99 26.36 147.42 47.57 4090.97 00:21:07.586 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5257.99 20.54 189.99 72.58 4099.10 00:21:07.586 ======================================================== 00:21:07.586 Total : 12004.99 46.89 166.06 47.57 4099.10 00:21:07.586 00:21:07.586 14:55:41 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:07.586 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.856 Initializing NVMe Controllers 00:21:10.856 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:10.856 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:10.856 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:10.856 Initialization complete. Launching workers. 00:21:10.856 ======================================================== 00:21:10.856 Latency(us) 00:21:10.856 Device Information : IOPS MiB/s Average min max 00:21:10.856 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18151.98 70.91 1762.81 496.00 6257.29 00:21:10.856 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4049.00 15.82 7937.66 3293.82 9559.09 00:21:10.856 ======================================================== 00:21:10.856 Total : 22200.98 86.72 2888.97 496.00 9559.09 00:21:10.856 00:21:10.856 14:55:44 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:21:10.856 14:55:44 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:10.856 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.038 Initializing NVMe Controllers 00:21:15.038 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.038 Controller IO queue size 128, less than required. 00:21:15.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:15.038 Controller IO queue size 128, less than required. 00:21:15.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:15.038 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:15.038 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:15.038 Initialization complete. Launching workers. 
00:21:15.038 ======================================================== 00:21:15.038 Latency(us) 00:21:15.038 Device Information : IOPS MiB/s Average min max 00:21:15.038 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3902.00 975.50 33000.16 14531.78 74029.10 00:21:15.038 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4051.00 1012.75 31311.51 14104.04 48371.41 00:21:15.038 ======================================================== 00:21:15.038 Total : 7953.00 1988.25 32140.02 14104.04 74029.10 00:21:15.038 00:21:15.038 14:55:48 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:21:15.295 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.554 No valid NVMe controllers or AIO or URING devices found 00:21:15.554 Initializing NVMe Controllers 00:21:15.554 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.554 Controller IO queue size 128, less than required. 00:21:15.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:15.554 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:15.554 Controller IO queue size 128, less than required. 00:21:15.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:15.554 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:15.554 WARNING: Some requested NVMe devices were skipped 00:21:15.554 14:55:49 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:21:15.554 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.733 Initializing NVMe Controllers 00:21:19.733 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.733 Controller IO queue size 128, less than required. 00:21:19.733 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.733 Controller IO queue size 128, less than required. 00:21:19.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.734 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.734 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:19.734 Initialization complete. Launching workers. 
00:21:19.734 00:21:19.734 ==================== 00:21:19.734 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:19.734 RDMA transport: 00:21:19.734 dev name: mlx5_0 00:21:19.734 polls: 401056 00:21:19.734 idle_polls: 397757 00:21:19.734 completions: 43398 00:21:19.734 queued_requests: 1 00:21:19.734 total_send_wrs: 21699 00:21:19.734 send_doorbell_updates: 3068 00:21:19.734 total_recv_wrs: 21826 00:21:19.734 recv_doorbell_updates: 3069 00:21:19.734 --------------------------------- 00:21:19.734 00:21:19.734 ==================== 00:21:19.734 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:19.734 RDMA transport: 00:21:19.734 dev name: mlx5_0 00:21:19.734 polls: 408719 00:21:19.734 idle_polls: 408442 00:21:19.734 completions: 20266 00:21:19.734 queued_requests: 1 00:21:19.734 total_send_wrs: 10133 00:21:19.734 send_doorbell_updates: 255 00:21:19.734 total_recv_wrs: 10260 00:21:19.734 recv_doorbell_updates: 258 00:21:19.734 --------------------------------- 00:21:19.734 ======================================================== 00:21:19.734 Latency(us) 00:21:19.734 Device Information : IOPS MiB/s Average min max 00:21:19.734 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5424.50 1356.12 23708.44 10996.95 56538.46 00:21:19.734 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2533.00 633.25 50555.52 30072.09 79660.70 00:21:19.734 ======================================================== 00:21:19.734 Total : 7957.50 1989.38 32254.30 10996.95 79660.70 00:21:19.734 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.992 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:19.992 rmmod nvme_rdma 00:21:19.992 rmmod nvme_fabrics 00:21:20.248 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2907337 ']' 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2907337 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2907337 ']' 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2907337 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2907337 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2907337' 00:21:20.249 killing process with pid 2907337 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2907337 00:21:20.249 14:55:53 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2907337 00:21:22.774 14:55:56 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:22.774 14:55:56 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:22.774 00:21:22.774 real 0m30.627s 00:21:22.774 user 1m41.169s 00:21:22.774 sys 0m5.263s 00:21:22.774 14:55:56 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:22.774 14:55:56 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:22.774 ************************************ 00:21:22.774 END TEST nvmf_perf 00:21:22.774 ************************************ 00:21:22.774 14:55:56 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:22.774 14:55:56 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:22.774 14:55:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:22.774 14:55:56 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:22.774 14:55:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:22.774 ************************************ 00:21:22.774 START TEST nvmf_fio_host 00:21:22.774 ************************************ 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:22.774 * Looking for test storage... 
00:21:22.774 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.774 14:55:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.775 14:55:56 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 
00:21:28.035 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:28.035 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:28.035 Found net devices under 0000:da:00.0: mlx_0_0 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:28.035 Found net devices under 0000:da:00.1: mlx_0_1 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:28.035 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:28.035 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:21:28.035 altname enp218s0f0np0 00:21:28.035 altname ens818f0np0 00:21:28.035 inet 192.168.100.8/24 scope global mlx_0_0 00:21:28.035 valid_lft forever preferred_lft forever 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:28.035 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:28.035 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:28.035 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:21:28.035 altname enp218s0f1np1 00:21:28.035 altname ens818f1np1 00:21:28.035 inet 192.168.100.9/24 scope global mlx_0_1 00:21:28.035 valid_lft forever preferred_lft forever 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- 
# continue 2 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:28.036 192.168.100.9' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:28.036 192.168.100.9' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:28.036 192.168.100.9' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2914564 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2914564 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2914564 ']' 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.036 14:56:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.036 [2024-07-15 14:56:01.760402] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:21:28.036 [2024-07-15 14:56:01.760451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.036 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.036 [2024-07-15 14:56:01.815765] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.036 [2024-07-15 14:56:01.896474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.036 [2024-07-15 14:56:01.896510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.036 [2024-07-15 14:56:01.896517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.036 [2024-07-15 14:56:01.896523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.036 [2024-07-15 14:56:01.896528] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.036 [2024-07-15 14:56:01.896574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.036 [2024-07-15 14:56:01.896671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.036 [2024-07-15 14:56:01.896760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.036 [2024-07-15 14:56:01.896762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.967 14:56:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.967 14:56:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:28.967 14:56:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:28.967 [2024-07-15 14:56:02.750282] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a9dcc0/0x1aa21b0) succeed. 
00:21:28.967 [2024-07-15 14:56:02.759367] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a9f300/0x1ae3840) succeed. 00:21:29.224 14:56:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:29.224 14:56:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:29.224 14:56:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.224 14:56:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:29.224 Malloc1 00:21:29.224 14:56:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:29.491 14:56:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:29.749 14:56:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:29.749 [2024-07-15 14:56:03.650151] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:30.006 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:30.007 
14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:30.007 14:56:03 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:30.264 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:30.264 fio-3.35 00:21:30.264 Starting 1 thread 00:21:30.264 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.810 00:21:32.810 test: (groupid=0, jobs=1): err= 0: pid=2915063: Mon Jul 15 14:56:06 2024 00:21:32.810 read: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(138MiB/2004msec) 00:21:32.810 slat (nsec): min=1395, max=34585, avg=1544.25, stdev=478.65 00:21:32.810 clat (usec): min=2086, max=6888, avg=3603.32, stdev=82.02 00:21:32.810 lat (usec): min=2108, max=6889, avg=3604.86, stdev=81.94 00:21:32.810 clat percentiles (usec): 00:21:32.810 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:21:32.810 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3621], 00:21:32.810 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:21:32.810 | 99.00th=[ 3654], 99.50th=[ 3687], 99.90th=[ 4686], 99.95th=[ 5604], 00:21:32.810 | 99.99th=[ 6521] 00:21:32.810 bw ( KiB/s): min=69160, max=71056, per=100.00%, avg=70548.00, stdev=927.18, samples=4 00:21:32.810 iops : min=17290, max=17764, avg=17637.00, stdev=231.80, samples=4 00:21:32.810 write: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(138MiB/2004msec); 0 zone resets 00:21:32.810 slat (nsec): min=1437, max=25388, avg=1633.96, stdev=471.97 00:21:32.810 clat (usec): min=2120, max=6876, avg=3602.00, stdev=82.82 00:21:32.810 lat (usec): min=2131, max=6877, avg=3603.63, stdev=82.75 00:21:32.810 clat percentiles (usec): 00:21:32.810 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:21:32.810 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:21:32.810 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:21:32.810 | 99.00th=[ 3654], 99.50th=[ 3720], 99.90th=[ 4686], 99.95th=[ 5997], 00:21:32.810 | 99.99th=[ 6521] 00:21:32.810 bw ( KiB/s): min=69144, max=71040, per=100.00%, avg=70562.00, stdev=945.36, samples=4 00:21:32.810 iops : min=17286, max=17760, avg=17640.50, stdev=236.34, samples=4 00:21:32.810 lat (msec) : 4=99.86%, 10=0.14% 00:21:32.810 cpu : usr=99.65%, sys=0.00%, ctx=17, majf=0, minf=4 00:21:32.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:32.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:32.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:32.810 issued rwts: total=35341,35341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:32.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:32.810 00:21:32.810 Run status group 0 (all jobs): 00:21:32.810 READ: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=138MiB (145MB), run=2004-2004msec 00:21:32.810 WRITE: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=138MiB (145MB), run=2004-2004msec 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:32.810 14:56:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 
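For readability, the nvmf_fio_host sequence traced above condenses to the shell sketch below: back an NVMe-oF subsystem with a 64 MiB malloc bdev, publish it on the RDMA listener, then drive I/O with fio through the preloaded SPDK NVMe plugin (first with example_config.fio at 4 KiB blocks, then, as invoked just above, with mock_sgl_config.fio; the ldd/grep loop only decides whether an ASan runtime must be prepended to LD_PRELOAD). Workspace paths are shortened here; this is an illustrative recap of host/fio.sh, not a substitute for it.

  # target side: create the backing bdev and export it over NVMe-oF/RDMA
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # initiator side: run fio with ioengine=spdk via the preloaded plugin
  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096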
00:21:33.074 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:33.074 fio-3.35 00:21:33.074 Starting 1 thread 00:21:33.074 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.607 00:21:35.607 test: (groupid=0, jobs=1): err= 0: pid=2915538: Mon Jul 15 14:56:09 2024 00:21:35.607 read: IOPS=14.2k, BW=222MiB/s (233MB/s)(437MiB/1971msec) 00:21:35.607 slat (nsec): min=2302, max=40239, avg=2650.56, stdev=980.98 00:21:35.607 clat (usec): min=475, max=7880, avg=1692.25, stdev=1357.41 00:21:35.607 lat (usec): min=478, max=7895, avg=1694.90, stdev=1357.69 00:21:35.607 clat percentiles (usec): 00:21:35.607 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 930], 00:21:35.607 | 30.00th=[ 1004], 40.00th=[ 1090], 50.00th=[ 1205], 60.00th=[ 1336], 00:21:35.607 | 70.00th=[ 1467], 80.00th=[ 1663], 90.00th=[ 4948], 95.00th=[ 5014], 00:21:35.608 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 7570], 00:21:35.608 | 99.99th=[ 7898] 00:21:35.608 bw ( KiB/s): min=108384, max=114944, per=49.01%, avg=111384.00, stdev=3094.07, samples=4 00:21:35.608 iops : min= 6774, max= 7184, avg=6961.50, stdev=193.38, samples=4 00:21:35.608 write: IOPS=8143, BW=127MiB/s (133MB/s)(227MiB/1785msec); 0 zone resets 00:21:35.608 slat (usec): min=27, max=100, avg=29.88, stdev= 5.23 00:21:35.608 clat (usec): min=4296, max=19596, avg=12728.01, stdev=1862.37 00:21:35.608 lat (usec): min=4325, max=19623, avg=12757.90, stdev=1861.83 00:21:35.608 clat percentiles (usec): 00:21:35.608 | 1.00th=[ 7963], 5.00th=[10028], 10.00th=[10552], 20.00th=[11338], 00:21:35.608 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:21:35.608 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15139], 95.00th=[15795], 00:21:35.608 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18744], 99.95th=[19268], 00:21:35.608 | 99.99th=[19530] 00:21:35.608 bw ( KiB/s): min=110336, max=119296, per=88.56%, avg=115400.00, stdev=3897.03, samples=4 00:21:35.608 iops : min= 6896, max= 7456, avg=7212.50, stdev=243.56, samples=4 00:21:35.608 lat (usec) : 500=0.01%, 750=1.70%, 1000=17.91% 00:21:35.608 lat (msec) : 2=36.75%, 4=2.00%, 10=9.26%, 20=32.38% 00:21:35.608 cpu : usr=97.26%, sys=1.15%, ctx=183, majf=0, minf=3 00:21:35.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:35.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.608 issued rwts: total=27995,14537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.608 00:21:35.608 Run status group 0 (all jobs): 00:21:35.608 READ: bw=222MiB/s (233MB/s), 222MiB/s-222MiB/s (233MB/s-233MB/s), io=437MiB (459MB), run=1971-1971msec 00:21:35.608 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=227MiB (238MB), run=1785-1785msec 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:35.608 14:56:09 
nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:35.608 rmmod nvme_rdma 00:21:35.608 rmmod nvme_fabrics 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2914564 ']' 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2914564 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2914564 ']' 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2914564 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2914564 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2914564' 00:21:35.608 killing process with pid 2914564 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2914564 00:21:35.608 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2914564 00:21:35.867 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.867 14:56:09 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:35.867 00:21:35.867 real 0m13.495s 00:21:35.867 user 0m48.939s 00:21:35.867 sys 0m4.943s 00:21:35.867 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.867 14:56:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.867 ************************************ 00:21:35.867 END TEST nvmf_fio_host 00:21:35.867 ************************************ 00:21:35.867 14:56:09 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:35.867 14:56:09 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:35.867 14:56:09 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:35.867 14:56:09 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.867 14:56:09 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:35.867 ************************************ 00:21:35.867 START TEST nvmf_failover 00:21:35.867 ************************************ 00:21:35.867 14:56:09 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:36.125 * Looking for test storage... 00:21:36.125 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.125 14:56:09 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:36.126 14:56:09 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.417 
14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:41.417 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:41.417 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:41.417 Found net devices under 0000:da:00.0: mlx_0_0 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:41.417 Found net devices under 0000:da:00.1: mlx_0_1 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:41.417 14:56:14 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:41.417 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:41.418 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:41.418 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:21:41.418 altname enp218s0f0np0 00:21:41.418 altname ens818f0np0 00:21:41.418 inet 192.168.100.8/24 scope global mlx_0_0 00:21:41.418 valid_lft forever preferred_lft forever 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:41.418 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:41.418 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:21:41.418 altname enp218s0f1np1 00:21:41.418 altname ens818f1np1 00:21:41.418 inet 192.168.100.9/24 scope global mlx_0_1 00:21:41.418 valid_lft forever preferred_lft forever 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:41.418 14:56:15 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:41.418 192.168.100.9' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:41.418 192.168.100.9' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:41.418 192.168.100.9' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2919033 00:21:41.418 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2919033 00:21:41.419 14:56:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.419 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2919033 ']' 00:21:41.419 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.419 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.419 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.419 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.419 14:56:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:41.419 [2024-07-15 14:56:15.224734] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:21:41.419 [2024-07-15 14:56:15.224789] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.419 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.419 [2024-07-15 14:56:15.280199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.709 [2024-07-15 14:56:15.358414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.709 [2024-07-15 14:56:15.358451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.709 [2024-07-15 14:56:15.358457] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.709 [2024-07-15 14:56:15.358463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.709 [2024-07-15 14:56:15.358467] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
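Condensed, the environment setup just traced (nvmf/common.sh) amounts to: discover the mlx5 netdevs, record their IPv4 addresses, load the host-side RDMA driver, and start the nvmf target with core mask 0xE before issuing any RPCs. A minimal sketch, with workspace paths shortened and the PID handling simplified relative to the harness's waitforlisten helper:

  # record the RDMA-capable interface addresses found above
  NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9
  modprobe nvme-rdma
  # launch the target; the harness then waits for /var/tmp/spdk.sock to accept RPCs
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!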
00:21:41.709 [2024-07-15 14:56:15.358523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.709 [2024-07-15 14:56:15.358613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.709 [2024-07-15 14:56:15.358614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.321 14:56:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.321 14:56:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:42.321 14:56:16 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.321 14:56:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.321 14:56:16 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:42.321 14:56:16 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.321 14:56:16 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:42.321 [2024-07-15 14:56:16.235088] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x633200/0x6376f0) succeed. 00:21:42.580 [2024-07-15 14:56:16.244153] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6347a0/0x678d80) succeed. 00:21:42.580 14:56:16 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:42.839 Malloc0 00:21:42.839 14:56:16 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.839 14:56:16 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:43.097 14:56:16 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:43.356 [2024-07-15 14:56:17.064397] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:43.356 14:56:17 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:43.356 [2024-07-15 14:56:17.248750] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:43.614 [2024-07-15 14:56:17.413278] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2919520 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm 
-f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2919520 /var/tmp/bdevperf.sock 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2919520 ']' 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.614 14:56:17 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.548 14:56:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:44.548 14:56:18 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:44.548 14:56:18 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:44.806 NVMe0n1 00:21:44.806 14:56:18 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:45.064 00:21:45.064 14:56:18 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2919755 00:21:45.064 14:56:18 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.064 14:56:18 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:45.995 14:56:19 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:46.252 14:56:19 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:49.533 14:56:22 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:49.533 00:21:49.533 14:56:23 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:49.533 14:56:23 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:52.816 14:56:26 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:52.816 [2024-07-15 14:56:26.604785] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:52.816 14:56:26 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:53.749 14:56:27 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:54.006 14:56:27 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 2919755 00:22:00.572 0 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 2919520 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2919520 ']' 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2919520 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919520 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2919520' 00:22:00.573 killing process with pid 2919520 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2919520 00:22:00.573 14:56:33 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2919520 00:22:00.573 14:56:34 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:00.573 [2024-07-15 14:56:17.468216] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:22:00.573 [2024-07-15 14:56:17.468266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919520 ] 00:22:00.573 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.573 [2024-07-15 14:56:17.522555] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.573 [2024-07-15 14:56:17.598662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.573 Running I/O for 15 seconds... 
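Before the abort traces dumped below from try.txt, host/failover.sh has set up one subsystem reachable over three RDMA listeners and pointed bdevperf at two of them; it then removes and re-adds listeners while the 15-second verify workload runs, forcing the initiator to fail over between paths. A condensed sketch of that sequence (paths shortened, sleeps and PID bookkeeping trimmed):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
  done
  # bdevperf: qd 128, 4 KiB verify workload for 15 s, controlled over its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  # while I/O runs: drop 4420, attach a path on 4422, drop 4421, re-add 4420, drop 4422, e.g.:
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420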
00:22:00.573 [2024-07-15 14:56:20.974170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 
sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.573 [2024-07-15 14:56:20.974635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.573 [2024-07-15 14:56:20.974642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974791] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.974989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.974995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.975009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.975023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.574 [2024-07-15 14:56:20.975036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23568 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x183f00 00:22:00.574 
[2024-07-15 14:56:20.975213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.574 [2024-07-15 14:56:20.975235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x183f00 00:22:00.574 [2024-07-15 14:56:20.975241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975343] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 
[2024-07-15 14:56:20.975621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183f00 00:22:00.575 [2024-07-15 14:56:20.975819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.575 [2024-07-15 14:56:20.975827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 
lba:24008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.975989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.975995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.976003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.976010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.976017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 
key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.976024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.976032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.976039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.976046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.976052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.976060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:20.976067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.977951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.576 [2024-07-15 14:56:20.977964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.576 [2024-07-15 14:56:20.977971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24112 len:8 PRP1 0x0 PRP2 0x0 00:22:00.576 [2024-07-15 14:56:20.977980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:20.978021] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:22:00.576 [2024-07-15 14:56:20.978030] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:22:00.576 [2024-07-15 14:56:20.978037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.576 [2024-07-15 14:56:20.980862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.576 [2024-07-15 14:56:20.995523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:00.576 [2024-07-15 14:56:21.045609] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:00.576 [2024-07-15 14:56:24.432383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183f00 00:22:00.576 [2024-07-15 14:56:24.432643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.576 [2024-07-15 14:56:24.432701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.576 [2024-07-15 14:56:24.432709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 
14:56:24.432852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.432978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.577 [2024-07-15 14:56:24.432984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 
00:22:00.577 [2024-07-15 14:56:24.432992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.432998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.433006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.433012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.433020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.433026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.433034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.433040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.433048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.433054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.433063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.433069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.433079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.433085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.577 [2024-07-15 14:56:24.433093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183f00 00:22:00.577 [2024-07-15 14:56:24.433099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433120] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114008 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:113504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183f00 00:22:00.578 [2024-07-15 14:56:24.433551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 
14:56:24.433668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.578 [2024-07-15 14:56:24.433682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.578 [2024-07-15 14:56:24.433689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113536 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.433897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433941] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.433990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.433996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.434010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:113624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:113648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183f00 00:22:00.579 [2024-07-15 14:56:24.434127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.434141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.434155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.434170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.434184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.434198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.579 [2024-07-15 14:56:24.434205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.579 [2024-07-15 14:56:24.434211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 
00:22:00.580 [2024-07-15 14:56:24.434219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:00.580 [2024-07-15 14:56:24.434225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 
00:22:00.580 [2024-07-15 14:56:24.434232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:00.580 [2024-07-15 14:56:24.434239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 
00:22:00.580 [2024-07-15 14:56:24.436207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:22:00.580 [2024-07-15 14:56:24.436220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:22:00.580 [2024-07-15 14:56:24.436226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113656 len:8 PRP1 0x0 PRP2 0x0 
00:22:00.580 [2024-07-15 14:56:24.436235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:00.580 [2024-07-15 14:56:24.436271] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 
00:22:00.580 [2024-07-15 14:56:24.436279] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 
00:22:00.580 [2024-07-15 14:56:24.436287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:00.580 [2024-07-15 14:56:24.439075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:22:00.580 [2024-07-15 14:56:24.453659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 
00:22:00.580 [2024-07-15 14:56:24.502373] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
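The run of notices above is the bdev_nvme failover sequence recorded by the test: once the queue pair is torn down, every queued I/O is completed manually with ABORTED - SQ DELETION (00/08), the qpair is freed, and the controller fails over from 192.168.100.8:4421 to 192.168.100.8:4422 before being reset successfully. When triaging a run like this, a quick tally of the aborted commands helps confirm that nothing beyond the expected abort-and-failover noise occurred. A minimal sketch using standard tools, assuming the console output has been saved to a file (the name console.log below is hypothetical):
  # Count aborted READ vs WRITE submissions printed by nvme_io_qpair_print_command
  grep -Eo 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' console.log \
      | awk '{print $NF}' | sort | uniq -c
  # Total number of completions reported as ABORTED - SQ DELETION
  grep -c 'ABORTED - SQ DELETION' console.log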
00:22:00.580 [2024-07-15 14:56:28.797698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.797845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.797859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:76592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.797989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.797995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 
key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.798010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.798024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.798040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.798055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.798070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.798084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 00:22:00.580 [2024-07-15 14:56:28.798098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.580 [2024-07-15 14:56:28.798219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.580 [2024-07-15 14:56:28.798225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183f00 00:22:00.581 [2024-07-15 14:56:28.798445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798562] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.581 [2024-07-15 14:56:28.798639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.581 [2024-07-15 14:56:28.798646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 
14:56:28.798702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:00.582 [2024-07-15 14:56:28.798827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:84ff7000 sqhd:52b0 p:0 m:0 dnr:0 00:22:00.582 [2024-07-15 14:56:28.798836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:00.582 [2024-07-15 14:56:28.798842 .. 14:56:28.799544] nvme_qpair.c: (condensed) every remaining queued command on sqid:1, WRITEs for lba 77344..77544 (SGL DATA BLOCK OFFSET 0x0) and READs for lba 76728..76904 (SGL KEYED DATA BLOCK, key:0x183f00), was completed with ABORTED - SQ DELETION (00/08) qid:1 sqhd:52b0 p:0 m:0 dnr:0 
00:22:00.583 [2024-07-15 14:56:28.801439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:00.583 [2024-07-15 14:56:28.801450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:00.583 [2024-07-15 14:56:28.801456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77552 len:8 PRP1 0x0 PRP2 0x0 00:22:00.583 [2024-07-15 14:56:28.801463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:00.583 [2024-07-15 14:56:28.801498] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:22:00.583 [2024-07-15 14:56:28.801510] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:22:00.583 [2024-07-15 14:56:28.801518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:00.583 [2024-07-15 14:56:28.804298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:00.583 [2024-07-15 14:56:28.818574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:00.583 [2024-07-15 14:56:28.866726] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
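The notices above are the normal failover signature from bdev_nvme: once the active path's submission queue is deleted, every queued READ/WRITE completes with ABORTED - SQ DELETION, the disconnected qpair is freed, and bdev_nvme_failover_trid moves the controller from 192.168.100.8:4422 back to 192.168.100.8:4420 before resetting it. The setup that gives bdev_nvme those alternate paths appears further down in this trace; a minimal sketch of it, reusing the rpc.py calls exactly as they are logged there (the address, ports 4420-4422, bdev name NVMe0 and NQN nqn.2016-06.io.spdk:cnode1 are this run's test values, not defaults):

#!/usr/bin/env bash
# Sketch: register one primary and two alternate RDMA paths so bdev_nvme can fail over
# between them when a path goes away. Paths below are this workspace's locations.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bdevperf_sock=/var/tmp/bdevperf.sock

# Target side: expose the subsystem on the two extra RDMA ports as well.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422

# Initiator side (bdevperf's RPC socket): attach NVMe0 once per path; the later attaches
# register alternate trids that the failover logic switches to when the current path drops.
for port in 4420 4421 4422; do
  $rpc -s $bdevperf_sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done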
00:22:00.583 00:22:00.583 Latency(us) 00:22:00.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.583 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:00.583 Verification LBA range: start 0x0 length 0x4000 00:22:00.583 NVMe0n1 : 15.01 14105.91 55.10 350.07 0.00 8832.92 351.09 1018616.69 00:22:00.583 =================================================================================================================== 00:22:00.583 Total : 14105.91 55.10 350.07 0.00 8832.92 351.09 1018616.69 00:22:00.583 Received shutdown signal, test time was about 15.000000 seconds 00:22:00.583 00:22:00.583 Latency(us) 00:22:00.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.583 =================================================================================================================== 00:22:00.583 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2922277 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2922277 /var/tmp/bdevperf.sock 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2922277 ']' 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
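The pass check for this stage is the grep just above: after three forced path failures, exactly three 'Resetting controller successful' notices must be present in the captured output (count=3). The stage then restarts bdevperf in paused mode so the following steps can reconfigure paths and drive I/O over its RPC socket. Roughly, using the same flags and paths shown in this trace (the path setup in between is elided here):

#!/usr/bin/env bash
# Sketch: start bdevperf idle and trigger the workload later via RPC.
bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
bdevperf_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
rpc_sock=/var/tmp/bdevperf.sock

# -z: do not start I/O until a perform_tests RPC arrives; the remaining flags mirror the
# trace (queue depth 128, 4096-byte verify workload, 1 second run, -f passed as above).
$bdevperf -z -r $rpc_sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!   # the test keeps the pid so it can wait for $rpc_sock and kill it later

# ...attach/detach the NVMe0 paths through $rpc_sock here, then kick off the I/O:
$bdevperf_py -s $rpc_sock perform_tests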
00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.583 14:56:34 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:01.148 14:56:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.148 14:56:35 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:01.148 14:56:35 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:01.406 [2024-07-15 14:56:35.182329] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:01.406 14:56:35 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:01.663 [2024-07-15 14:56:35.374988] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:22:01.663 14:56:35 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.921 NVMe0n1 00:22:01.921 14:56:35 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.178 00:22:02.178 14:56:35 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.435 00:22:02.435 14:56:36 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:02.435 14:56:36 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:02.435 14:56:36 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:02.691 14:56:36 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:05.961 14:56:39 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:05.961 14:56:39 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:05.961 14:56:39 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2923201 00:22:05.961 14:56:39 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:05.961 14:56:39 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 2923201 00:22:06.894 0 00:22:06.894 14:56:40 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:06.894 [2024-07-15 14:56:34.233209] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:22:06.894 [2024-07-15 14:56:34.233259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922277 ] 00:22:06.894 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.894 [2024-07-15 14:56:34.288819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.894 [2024-07-15 14:56:34.358546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.894 [2024-07-15 14:56:36.470008] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:22:06.894 [2024-07-15 14:56:36.470643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.894 [2024-07-15 14:56:36.470672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.894 [2024-07-15 14:56:36.495203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:06.894 [2024-07-15 14:56:36.511108] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:06.894 Running I/O for 1 seconds... 00:22:06.894 00:22:06.894 Latency(us) 00:22:06.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:06.894 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:06.894 Verification LBA range: start 0x0 length 0x4000 00:22:06.894 NVMe0n1 : 1.01 17808.68 69.57 0.00 0.00 7147.72 2574.63 9424.70 00:22:06.894 =================================================================================================================== 00:22:06.894 Total : 17808.68 69.57 0.00 0.00 7147.72 2574.63 9424.70 00:22:06.894 14:56:40 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.894 14:56:40 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:07.151 14:56:41 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.408 14:56:41 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:07.408 14:56:41 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:07.666 14:56:41 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.666 14:56:41 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 2922277 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2922277 ']' 00:22:10.943 
14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2922277 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2922277 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2922277' 00:22:10.943 killing process with pid 2922277 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2922277 00:22:10.943 14:56:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2922277 00:22:11.201 14:56:44 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:11.201 14:56:44 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:11.461 rmmod nvme_rdma 00:22:11.461 rmmod nvme_fabrics 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2919033 ']' 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2919033 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2919033 ']' 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2919033 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919033 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 2919033' 00:22:11.461 killing process with pid 2919033 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2919033 00:22:11.461 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2919033 00:22:11.719 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.719 14:56:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:11.719 00:22:11.719 real 0m35.784s 00:22:11.719 user 2m3.500s 00:22:11.719 sys 0m5.875s 00:22:11.719 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.719 14:56:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:11.719 ************************************ 00:22:11.719 END TEST nvmf_failover 00:22:11.719 ************************************ 00:22:11.719 14:56:45 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:11.719 14:56:45 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:11.719 14:56:45 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:11.719 14:56:45 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.719 14:56:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:11.719 ************************************ 00:22:11.719 START TEST nvmf_host_discovery 00:22:11.719 ************************************ 00:22:11.719 14:56:45 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:11.978 * Looking for test storage... 00:22:11.978 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.978 14:56:45 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:11.978 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:22:11.978 00:22:11.978 real 0m0.119s 00:22:11.978 user 0m0.055s 00:22:11.978 sys 0m0.072s 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.978 ************************************ 00:22:11.978 END TEST nvmf_host_discovery 00:22:11.978 ************************************ 00:22:11.978 14:56:45 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:11.978 14:56:45 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:22:11.978 14:56:45 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:11.978 14:56:45 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.978 14:56:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:11.978 ************************************ 00:22:11.978 START TEST nvmf_host_multipath_status 00:22:11.978 ************************************ 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:22:11.978 * Looking for test storage... 
00:22:11.978 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.978 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.979 14:56:45 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.979 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.236 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.236 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:12.236 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:12.236 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:12.236 14:56:45 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:17.503 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:17.503 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:17.503 
14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:17.503 Found net devices under 0000:da:00.0: mlx_0_0 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:17.503 Found net devices under 0000:da:00.1: mlx_0_1 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:17.503 14:56:50 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:17.503 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:17.503 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:17.503 link/ether ec:0d:9a:8b:2b:7c brd 
ff:ff:ff:ff:ff:ff 00:22:17.503 altname enp218s0f0np0 00:22:17.503 altname ens818f0np0 00:22:17.504 inet 192.168.100.8/24 scope global mlx_0_0 00:22:17.504 valid_lft forever preferred_lft forever 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:17.504 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:17.504 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:17.504 altname enp218s0f1np1 00:22:17.504 altname ens818f1np1 00:22:17.504 inet 192.168.100.9/24 scope global mlx_0_1 00:22:17.504 valid_lft forever preferred_lft forever 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.504 14:56:50 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:17.504 192.168.100.9' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:17.504 192.168.100.9' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:17.504 192.168.100.9' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2927204 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2927204 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2927204 ']' 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.504 14:56:50 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:17.504 [2024-07-15 14:56:51.020639] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:22:17.504 [2024-07-15 14:56:51.020694] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.504 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.504 [2024-07-15 14:56:51.078794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:17.504 [2024-07-15 14:56:51.153389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.504 [2024-07-15 14:56:51.153429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.504 [2024-07-15 14:56:51.153436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.504 [2024-07-15 14:56:51.153441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.504 [2024-07-15 14:56:51.153446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
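[editor's note] The nvmfappstart step traced above amounts to launching the target binary and polling its RPC socket before any configuration RPCs are issued. A minimal sketch of that pattern, using the same binary, core mask and socket path that appear in the trace (the real waitforlisten in autotest_common.sh adds a retry cap and richer logging, simplified here):

# Sketch only: start the NVMe-oF target on cores 0-1 and wait for its RPC socket.
# Paths mirror the trace; rpc.py is spdk/scripts/rpc.py.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

# Poll /var/tmp/spdk.sock until the target answers RPCs.
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done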
00:22:17.504 [2024-07-15 14:56:51.153514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.504 [2024-07-15 14:56:51.153516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2927204 00:22:18.071 14:56:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:18.330 [2024-07-15 14:56:52.041046] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdbb3c0/0xdbf8b0) succeed. 00:22:18.330 [2024-07-15 14:56:52.049957] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdbc870/0xe00f40) succeed. 00:22:18.330 14:56:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:18.589 Malloc0 00:22:18.589 14:56:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:18.589 14:56:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.847 14:56:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:19.105 [2024-07-15 14:56:52.814484] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:19.105 14:56:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:19.105 [2024-07-15 14:56:52.990894] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2927494 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 2927494 /var/tmp/bdevperf.sock 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2927494 ']' 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.105 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:20.039 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.039 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:20.039 14:56:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:20.296 14:56:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:20.555 Nvme0n1 00:22:20.555 14:56:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:20.814 Nvme0n1 00:22:20.814 14:56:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:20.814 14:56:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:22.712 14:56:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:22.712 14:56:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:22.977 14:56:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:22.977 14:56:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:24.348 14:56:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:24.348 14:56:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:24.348 14:56:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:24.348 14:56:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.348 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.607 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.607 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.607 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.607 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.866 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.866 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:24.866 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.867 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.867 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.867 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:24.867 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.867 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.125 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.125 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:22:25.125 14:56:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:25.384 14:56:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:25.384 14:56:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.849 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.120 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.120 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.120 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.120 14:57:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:27.120 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.120 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:27.120 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.120 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:27.389 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.389 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:27.389 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.389 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:27.648 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.648 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:27.648 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:27.648 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:27.907 14:57:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:28.843 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:28.843 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:28.843 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.843 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:29.102 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.102 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:29.102 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:29.102 14:57:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.360 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:29.360 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:29.360 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.360 
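[editor's note] The repeated rpc.py + jq pairs above are the test's port_status checks: each one queries bdevperf's io_paths and compares a single field (current, connected or accessible) for a given trsvcid against the expected value. A condensed sketch of that helper, using the same RPC and jq filter shown in the trace (the parameterized field is a simplification; the traced script hardcodes one filter per call):

# $1 = port (trsvcid), $2 = field (current|connected|accessible), $3 = expected value
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# e.g. assert that 4420 is the current path and 4421 is not:
port_status 4420 current true
port_status 4421 current false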
14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.617 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.874 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.874 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.874 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.874 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:30.132 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.132 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:30.132 14:57:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:30.132 14:57:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:30.392 14:57:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:31.421 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:31.421 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:31.421 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.421 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:22:31.679 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.679 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:31.679 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.679 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.680 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.680 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.680 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.680 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.938 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.938 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.938 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.938 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.197 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.197 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.197 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.197 14:57:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.197 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.197 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:32.197 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.197 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.456 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.456 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:32.456 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:32.715 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:32.716 14:57:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.093 14:57:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:34.351 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.351 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:34.351 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.351 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.610 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.610 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:34.610 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.610 
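[editor's note] Each ANA transition in the trace is two nvmf_subsystem_listener_set_ana_state calls against the target's default RPC socket, one per listener port, followed by a short sleep so the host side can process the ANA change notification before the next check_status. A sketch matching the calls shown above:

# Flip the ANA state of both listeners, e.g. set_ANA_state inaccessible optimized.
set_ANA_state() {
    # $1 = state for the 4420 listener, $2 = state for the 4421 listener
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

set_ANA_state inaccessible optimized
sleep 1   # give the initiator time to pick up the new ANA state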
14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.610 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.610 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:34.610 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.610 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:34.869 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.869 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:34.869 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:35.128 14:57:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:35.128 14:57:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.506 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:36.765 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.765 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:36.765 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.765 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:37.024 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.024 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:37.024 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:37.024 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.282 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.282 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:37.282 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.282 14:57:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:37.282 14:57:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.282 14:57:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:37.541 14:57:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:37.541 14:57:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:37.800 14:57:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:37.800 14:57:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:38.737 14:57:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:38.737 14:57:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:38.737 14:57:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.737 14:57:12 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:38.995 14:57:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.995 14:57:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:38.995 14:57:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.995 14:57:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:39.253 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.253 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:39.253 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.253 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.511 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:39.769 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.769 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:39.769 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.769 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:40.028 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.028 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:40.028 
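[editor's note] From this point the bdev runs with an active/active policy: the bdev_nvme_set_multipath_policy call traced just above switches Nvme0n1 so that every accessible optimized path reports current==true, which is why check_status now expects true for both 4420 and 4421. The switch itself is a single RPC against the bdevperf socket, sketched here with the arguments from the trace:

# Enable active/active multipath on the aggregated bdev created by the two
# attach_controller calls (both paths were attached under the name Nvme0).
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active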
14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:40.028 14:57:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:40.287 14:57:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:41.222 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:41.222 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:41.222 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.222 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.480 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:41.739 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.739 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:41.739 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.739 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.998 14:57:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:42.257 14:57:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.257 14:57:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:42.257 14:57:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:42.515 14:57:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:42.774 14:57:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:43.708 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:43.709 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:43.709 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.709 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:43.709 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.709 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:43.709 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.709 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:43.967 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.967 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:43.967 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.967 
14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:44.226 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.226 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:44.226 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.226 14:57:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.484 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:44.742 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.742 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:44.742 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:44.999 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:44.999 14:57:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:45.934 14:57:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:45.934 14:57:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:45.934 14:57:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.934 14:57:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:22:46.191 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.191 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:46.191 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.191 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:46.448 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:46.448 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:46.448 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.448 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.705 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:46.963 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.963 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:46.963 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.963 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2927494 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2927494 ']' 00:22:47.220 
14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2927494 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2927494 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2927494' 00:22:47.220 killing process with pid 2927494 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2927494 00:22:47.220 14:57:20 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2927494 00:22:47.220 Connection closed with partial response: 00:22:47.220 00:22:47.220 00:22:47.481 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2927494 00:22:47.481 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.481 [2024-07-15 14:56:53.051626] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:22:47.481 [2024-07-15 14:56:53.051677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927494 ] 00:22:47.481 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.481 [2024-07-15 14:56:53.102079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.481 [2024-07-15 14:56:53.180877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.481 Running I/O for 90 seconds... 
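For reference, every status check traced above funnels through the same few shell helpers in host/multipath_status.sh: an RPC call to the bdevperf application followed by a jq filter on the chosen listener port. The sketch below is a reconstruction from the traced commands only; the function names, RPC socket path, subsystem NQN, target address and jq filter all appear in the trace, but the shell variable names and the exact function bodies are assumptions, not a copy of the script.

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1
target_ip=192.168.100.8

# port_status <trsvcid> <current|connected|accessible> <expected>
# Ask bdevperf for its I/O paths and compare one attribute of the path that
# goes through the given listener port against the expected value.
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

# check_status <4420 current> <4421 current> <4420 connected> <4421 connected>
#              <4420 accessible> <4421 accessible>
check_status() {
    port_status 4420 current "$1" && port_status 4421 current "$2" &&
    port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
    port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
}

# set_ANA_state <state for 4420 listener> <state for 4421 listener>
# Flip the ANA state of the two RDMA listeners on the target side; the host
# view is then re-checked with check_status after a short sleep.
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a $target_ip -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state $nqn -t rdma -a $target_ip -s 4421 -n "$2"
}

In the trace above, set_ANA_state non_optimized inaccessible followed by check_status true false true true true false confirms that the 4420 path stays current and accessible while the 4421 path remains connected but is no longer accessible.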
00:22:47.481 [2024-07-15 14:57:06.394673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:18504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:48 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.394988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.394995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.395005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.395012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.395021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.395027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.395036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.395042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.395051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:18648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.395058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.395067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.481 [2024-07-15 14:57:06.395073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:47.481 [2024-07-15 14:57:06.395082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:47.482 [2024-07-15 14:57:06.395198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:18800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:18888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:22:47.482 [2024-07-15 14:57:06.395659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.482 [2024-07-15 14:57:06.395682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x184400 00:22:47.482 [2024-07-15 14:57:06.395698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:18192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184400 00:22:47.482 [2024-07-15 14:57:06.395714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:47.482 [2024-07-15 14:57:06.395723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x184400 00:22:47.482 [2024-07-15 14:57:06.395730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:1 nsid:1 lba:18240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18312 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.395993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.395999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18384 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007516000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184400 00:22:47.483 [2024-07-15 14:57:06.396172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.396984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.396990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.397005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.397012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:47.483 [2024-07-15 14:57:06.397027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.483 [2024-07-15 14:57:06.397033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:47.484 [2024-07-15 14:57:06.397311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:06.397422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:06.397444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:06.397466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:06.397487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:06.397509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
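The remainder of the capture is more of the same: paired nvme_qpair.c print_command/print_completion records, where the completion status ASYMMETRIC ACCESS INACCESSIBLE (03/02) is the NVMe path-related status returned for I/O that hit a listener whose ANA group was in the inaccessible state at that moment. A quick way to tally those completions from a saved copy of this dump, purely illustrative and not part of the test script:

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
# broken down per queue pair:
grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c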
00:22:47.484 [2024-07-15 14:57:06.397524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:06.397530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:06.397556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:06.397571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:06.397577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.825487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:18.825526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:104904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:104912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:47.484 [2024-07-15 14:57:18.826237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:18.826268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:18.826301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:104984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.484 [2024-07-15 14:57:18.826316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:18.826332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:104536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:18.826349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:47.484 [2024-07-15 14:57:18.826358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184400 00:22:47.484 [2024-07-15 14:57:18.826365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 
cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:104584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:105120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 
14:57:18.826885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.826965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.826990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.826996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.827005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.827012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.827021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:104808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.827028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 
m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.827037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.827044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.827055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.485 [2024-07-15 14:57:18.827062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.827071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.827077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.827086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x184400 00:22:47.485 [2024-07-15 14:57:18.827093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:47.485 [2024-07-15 14:57:18.827103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.486 [2024-07-15 14:57:18.827111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.486 [2024-07-15 14:57:18.827143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104696 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:104720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.486 [2024-07-15 14:57:18.827221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.486 [2024-07-15 14:57:18.827252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:104784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.486 [2024-07-15 14:57:18.827283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:47.486 [2024-07-15 14:57:18.827301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827332] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:47.486 [2024-07-15 14:57:18.827341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184400 00:22:47.486 [2024-07-15 14:57:18.827347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:47.486 Received shutdown signal, test time was about 26.283330 seconds 00:22:47.486 00:22:47.486 Latency(us) 00:22:47.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:47.486 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:47.486 Verification LBA range: start 0x0 length 0x4000 00:22:47.486 Nvme0n1 : 26.28 15706.81 61.35 0.00 0.00 8129.77 73.14 3019898.88 00:22:47.486 =================================================================================================================== 00:22:47.486 Total : 15706.81 61.35 0.00 0.00 8129.77 73.14 3019898.88 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:47.486 rmmod nvme_rdma 00:22:47.486 rmmod nvme_fabrics 00:22:47.486 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2927204 ']' 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2927204 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2927204 ']' 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2927204 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 2927204 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2927204' 00:22:47.744 killing process with pid 2927204 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2927204 00:22:47.744 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2927204 00:22:48.003 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.003 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:48.003 00:22:48.003 real 0m35.914s 00:22:48.003 user 1m45.645s 00:22:48.003 sys 0m7.174s 00:22:48.003 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.003 14:57:21 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.003 ************************************ 00:22:48.003 END TEST nvmf_host_multipath_status 00:22:48.003 ************************************ 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:48.003 14:57:21 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:48.003 ************************************ 00:22:48.003 START TEST nvmf_discovery_remove_ifc 00:22:48.003 ************************************ 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:48.003 * Looking for test storage... 
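For readers following the trace, the multipath_status teardown that ran just above, before this next test started, reduces to a short sequence. The sketch below is reconstructed from the xtrace lines, not copied from the script; $SPDK_ROOT stands for the workspace path and $nvmfpid for the target pid captured at startup (2927204 in this run).

  SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK_ROOT/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the test subsystem
  trap - SIGINT SIGTERM EXIT                                                  # clear the cleanup trap
  rm -f $SPDK_ROOT/test/nvmf/host/try.txt                                     # remove the scratch file
  sync                                                                        # nvmfcleanup flushes before unloading modules
  modprobe -v -r nvme-rdma                                                    # unload the RDMA host transport
  modprobe -v -r nvme-fabrics                                                 # unload nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                                          # killprocess stops the nvmf target app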
00:22:48.003 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:48.003 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
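The skip message above comes from a short guard at the top of discovery_remove_ifc.sh. The sketch below is reconstructed from the xtrace output; the name of the variable holding the transport is not visible in the trace, so $TEST_TRANSPORT is an assumption.

  # Guard traced at discovery_remove_ifc.sh@14-16 (sketch; $TEST_TRANSPORT is assumed)
  if [ "$TEST_TRANSPORT" == rdma ]; then
          echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
          exit 0
  fi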
00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:22:48.003 00:22:48.003 real 0m0.104s 00:22:48.003 user 0m0.054s 00:22:48.003 sys 0m0.059s 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:48.003 14:57:21 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.003 ************************************ 00:22:48.003 END TEST nvmf_discovery_remove_ifc 00:22:48.003 ************************************ 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:48.003 14:57:21 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.003 14:57:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:48.262 ************************************ 00:22:48.262 START TEST nvmf_identify_kernel_target 00:22:48.262 ************************************ 00:22:48.262 14:57:21 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:48.262 * Looking for test storage... 00:22:48.262 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:48.262 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:48.263 14:57:22 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:48.263 14:57:22 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:48.263 14:57:22 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 
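The trace that follows is gather_supported_nvmf_pci_devs filling these arrays with supported NIC PCI IDs and then walking the PCI bus; the sketch below mirrors the enumeration it ends up doing. The real helper builds its device list from a cached PCI scan, so the two addresses are simply the 0x15b3:0x1015 functions reported later in this run.

  # Sketch: map each mlx5 PCI function to its net device via sysfs
  pci_devs=(0000:da:00.0 0000:da:00.1)                       # 0x15b3:0x1015 devices seen in this run
  for pci in "${pci_devs[@]}"; do
          pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
          pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
          echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done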
00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:53.529 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:53.529 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:53.529 Found net devices under 0000:da:00.0: mlx_0_0 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:53.529 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:53.530 Found net devices under 0000:da:00.1: mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:53.530 
14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 
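The get_ip_address helper that the trace is entering here reduces to a small pipeline. This is a sketch reconstructed from the xtrace lines that follow, where it returns 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1.

  # Sketch of get_ip_address as traced below: first IPv4 address of an interface
  get_ip_address() {
          local interface=$1
          ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1   # -> 192.168.100.9 in this run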
00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:53.530 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:53.530 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:53.530 altname enp218s0f0np0 00:22:53.530 altname ens818f0np0 00:22:53.530 inet 192.168.100.8/24 scope global mlx_0_0 00:22:53.530 valid_lft forever preferred_lft forever 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:53.530 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:53.530 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:53.530 altname enp218s0f1np1 00:22:53.530 altname ens818f1np1 00:22:53.530 inet 192.168.100.9/24 scope global mlx_0_1 00:22:53.530 valid_lft forever preferred_lft forever 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:53.530 192.168.100.9' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:53.530 192.168.100.9' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:53.530 192.168.100.9' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:22:53.530 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:53.531 14:57:27 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:56.059 Waiting for block devices as requested 00:22:56.059 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:22:56.059 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:56.316 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:56.316 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:56.316 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:56.316 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:56.573 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:56.573 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:56.573 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:56.573 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:56.831 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:56.831 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:56.831 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:57.088 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:57.088 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:57.088 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:57.088 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:57.346 No valid GPT data, bailing 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:57.346 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:22:57.605 00:22:57.605 Discovery Log Number of Records 2, Generation counter 2 00:22:57.605 =====Discovery Log Entry 0====== 00:22:57.605 trtype: rdma 00:22:57.605 adrfam: ipv4 00:22:57.605 subtype: current discovery subsystem 00:22:57.605 treq: not specified, sq flow control disable supported 00:22:57.605 portid: 1 00:22:57.605 trsvcid: 4420 00:22:57.605 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:57.605 traddr: 192.168.100.8 00:22:57.605 eflags: none 00:22:57.605 rdma_prtype: not specified 00:22:57.605 rdma_qptype: connected 00:22:57.605 rdma_cms: rdma-cm 00:22:57.605 rdma_pkey: 0x0000 00:22:57.605 =====Discovery Log Entry 1====== 00:22:57.605 trtype: rdma 00:22:57.605 adrfam: ipv4 00:22:57.605 subtype: nvme subsystem 00:22:57.605 treq: not specified, sq flow control disable supported 00:22:57.605 portid: 1 00:22:57.605 trsvcid: 4420 00:22:57.605 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:57.605 traddr: 192.168.100.8 00:22:57.605 eflags: none 00:22:57.605 rdma_prtype: not specified 00:22:57.605 rdma_qptype: connected 00:22:57.605 rdma_cms: rdma-cm 00:22:57.605 rdma_pkey: 0x0000 00:22:57.605 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:22:57.605 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:57.605 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.605 ===================================================== 00:22:57.605 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:57.605 ===================================================== 00:22:57.605 Controller Capabilities/Features 00:22:57.605 ================================ 00:22:57.605 Vendor ID: 0000 00:22:57.605 Subsystem Vendor ID: 0000 00:22:57.605 Serial Number: 48d213c9dd848444743b 00:22:57.605 Model Number: Linux 00:22:57.605 Firmware Version: 6.7.0-68 00:22:57.605 Recommended Arb Burst: 0 00:22:57.605 IEEE OUI Identifier: 00 00 00 00:22:57.605 Multi-path I/O 00:22:57.605 May have multiple subsystem ports: No 00:22:57.605 May have multiple controllers: No 00:22:57.605 Associated with SR-IOV VF: No 00:22:57.605 
Max Data Transfer Size: Unlimited 00:22:57.605 Max Number of Namespaces: 0 00:22:57.605 Max Number of I/O Queues: 1024 00:22:57.605 NVMe Specification Version (VS): 1.3 00:22:57.605 NVMe Specification Version (Identify): 1.3 00:22:57.605 Maximum Queue Entries: 128 00:22:57.605 Contiguous Queues Required: No 00:22:57.605 Arbitration Mechanisms Supported 00:22:57.605 Weighted Round Robin: Not Supported 00:22:57.605 Vendor Specific: Not Supported 00:22:57.605 Reset Timeout: 7500 ms 00:22:57.605 Doorbell Stride: 4 bytes 00:22:57.605 NVM Subsystem Reset: Not Supported 00:22:57.605 Command Sets Supported 00:22:57.605 NVM Command Set: Supported 00:22:57.605 Boot Partition: Not Supported 00:22:57.605 Memory Page Size Minimum: 4096 bytes 00:22:57.605 Memory Page Size Maximum: 4096 bytes 00:22:57.605 Persistent Memory Region: Not Supported 00:22:57.605 Optional Asynchronous Events Supported 00:22:57.605 Namespace Attribute Notices: Not Supported 00:22:57.605 Firmware Activation Notices: Not Supported 00:22:57.605 ANA Change Notices: Not Supported 00:22:57.605 PLE Aggregate Log Change Notices: Not Supported 00:22:57.605 LBA Status Info Alert Notices: Not Supported 00:22:57.605 EGE Aggregate Log Change Notices: Not Supported 00:22:57.605 Normal NVM Subsystem Shutdown event: Not Supported 00:22:57.605 Zone Descriptor Change Notices: Not Supported 00:22:57.605 Discovery Log Change Notices: Supported 00:22:57.605 Controller Attributes 00:22:57.605 128-bit Host Identifier: Not Supported 00:22:57.605 Non-Operational Permissive Mode: Not Supported 00:22:57.605 NVM Sets: Not Supported 00:22:57.605 Read Recovery Levels: Not Supported 00:22:57.605 Endurance Groups: Not Supported 00:22:57.605 Predictable Latency Mode: Not Supported 00:22:57.605 Traffic Based Keep ALive: Not Supported 00:22:57.605 Namespace Granularity: Not Supported 00:22:57.605 SQ Associations: Not Supported 00:22:57.605 UUID List: Not Supported 00:22:57.605 Multi-Domain Subsystem: Not Supported 00:22:57.605 Fixed Capacity Management: Not Supported 00:22:57.605 Variable Capacity Management: Not Supported 00:22:57.605 Delete Endurance Group: Not Supported 00:22:57.605 Delete NVM Set: Not Supported 00:22:57.605 Extended LBA Formats Supported: Not Supported 00:22:57.605 Flexible Data Placement Supported: Not Supported 00:22:57.605 00:22:57.605 Controller Memory Buffer Support 00:22:57.605 ================================ 00:22:57.605 Supported: No 00:22:57.605 00:22:57.605 Persistent Memory Region Support 00:22:57.605 ================================ 00:22:57.605 Supported: No 00:22:57.605 00:22:57.605 Admin Command Set Attributes 00:22:57.605 ============================ 00:22:57.605 Security Send/Receive: Not Supported 00:22:57.605 Format NVM: Not Supported 00:22:57.605 Firmware Activate/Download: Not Supported 00:22:57.605 Namespace Management: Not Supported 00:22:57.605 Device Self-Test: Not Supported 00:22:57.605 Directives: Not Supported 00:22:57.605 NVMe-MI: Not Supported 00:22:57.605 Virtualization Management: Not Supported 00:22:57.605 Doorbell Buffer Config: Not Supported 00:22:57.605 Get LBA Status Capability: Not Supported 00:22:57.605 Command & Feature Lockdown Capability: Not Supported 00:22:57.605 Abort Command Limit: 1 00:22:57.605 Async Event Request Limit: 1 00:22:57.605 Number of Firmware Slots: N/A 00:22:57.605 Firmware Slot 1 Read-Only: N/A 00:22:57.605 Firmware Activation Without Reset: N/A 00:22:57.605 Multiple Update Detection Support: N/A 00:22:57.605 Firmware Update Granularity: No Information Provided 00:22:57.605 
Per-Namespace SMART Log: No 00:22:57.605 Asymmetric Namespace Access Log Page: Not Supported 00:22:57.605 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:57.605 Command Effects Log Page: Not Supported 00:22:57.605 Get Log Page Extended Data: Supported 00:22:57.605 Telemetry Log Pages: Not Supported 00:22:57.605 Persistent Event Log Pages: Not Supported 00:22:57.605 Supported Log Pages Log Page: May Support 00:22:57.605 Commands Supported & Effects Log Page: Not Supported 00:22:57.605 Feature Identifiers & Effects Log Page:May Support 00:22:57.605 NVMe-MI Commands & Effects Log Page: May Support 00:22:57.605 Data Area 4 for Telemetry Log: Not Supported 00:22:57.605 Error Log Page Entries Supported: 1 00:22:57.605 Keep Alive: Not Supported 00:22:57.605 00:22:57.605 NVM Command Set Attributes 00:22:57.605 ========================== 00:22:57.605 Submission Queue Entry Size 00:22:57.605 Max: 1 00:22:57.605 Min: 1 00:22:57.605 Completion Queue Entry Size 00:22:57.605 Max: 1 00:22:57.605 Min: 1 00:22:57.605 Number of Namespaces: 0 00:22:57.605 Compare Command: Not Supported 00:22:57.605 Write Uncorrectable Command: Not Supported 00:22:57.605 Dataset Management Command: Not Supported 00:22:57.605 Write Zeroes Command: Not Supported 00:22:57.605 Set Features Save Field: Not Supported 00:22:57.606 Reservations: Not Supported 00:22:57.606 Timestamp: Not Supported 00:22:57.606 Copy: Not Supported 00:22:57.606 Volatile Write Cache: Not Present 00:22:57.606 Atomic Write Unit (Normal): 1 00:22:57.606 Atomic Write Unit (PFail): 1 00:22:57.606 Atomic Compare & Write Unit: 1 00:22:57.606 Fused Compare & Write: Not Supported 00:22:57.606 Scatter-Gather List 00:22:57.606 SGL Command Set: Supported 00:22:57.606 SGL Keyed: Supported 00:22:57.606 SGL Bit Bucket Descriptor: Not Supported 00:22:57.606 SGL Metadata Pointer: Not Supported 00:22:57.606 Oversized SGL: Not Supported 00:22:57.606 SGL Metadata Address: Not Supported 00:22:57.606 SGL Offset: Supported 00:22:57.606 Transport SGL Data Block: Not Supported 00:22:57.606 Replay Protected Memory Block: Not Supported 00:22:57.606 00:22:57.606 Firmware Slot Information 00:22:57.606 ========================= 00:22:57.606 Active slot: 0 00:22:57.606 00:22:57.606 00:22:57.606 Error Log 00:22:57.606 ========= 00:22:57.606 00:22:57.606 Active Namespaces 00:22:57.606 ================= 00:22:57.606 Discovery Log Page 00:22:57.606 ================== 00:22:57.606 Generation Counter: 2 00:22:57.606 Number of Records: 2 00:22:57.606 Record Format: 0 00:22:57.606 00:22:57.606 Discovery Log Entry 0 00:22:57.606 ---------------------- 00:22:57.606 Transport Type: 1 (RDMA) 00:22:57.606 Address Family: 1 (IPv4) 00:22:57.606 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:57.606 Entry Flags: 00:22:57.606 Duplicate Returned Information: 0 00:22:57.606 Explicit Persistent Connection Support for Discovery: 0 00:22:57.606 Transport Requirements: 00:22:57.606 Secure Channel: Not Specified 00:22:57.606 Port ID: 1 (0x0001) 00:22:57.606 Controller ID: 65535 (0xffff) 00:22:57.606 Admin Max SQ Size: 32 00:22:57.606 Transport Service Identifier: 4420 00:22:57.606 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:57.606 Transport Address: 192.168.100.8 00:22:57.606 Transport Specific Address Subtype - RDMA 00:22:57.606 RDMA QP Service Type: 1 (Reliable Connected) 00:22:57.606 RDMA Provider Type: 1 (No provider specified) 00:22:57.606 RDMA CM Service: 1 (RDMA_CM) 00:22:57.606 Discovery Log Entry 1 00:22:57.606 ---------------------- 00:22:57.606 
Transport Type: 1 (RDMA) 00:22:57.606 Address Family: 1 (IPv4) 00:22:57.606 Subsystem Type: 2 (NVM Subsystem) 00:22:57.606 Entry Flags: 00:22:57.606 Duplicate Returned Information: 0 00:22:57.606 Explicit Persistent Connection Support for Discovery: 0 00:22:57.606 Transport Requirements: 00:22:57.606 Secure Channel: Not Specified 00:22:57.606 Port ID: 1 (0x0001) 00:22:57.606 Controller ID: 65535 (0xffff) 00:22:57.606 Admin Max SQ Size: 32 00:22:57.606 Transport Service Identifier: 4420 00:22:57.606 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:57.606 Transport Address: 192.168.100.8 00:22:57.606 Transport Specific Address Subtype - RDMA 00:22:57.606 RDMA QP Service Type: 1 (Reliable Connected) 00:22:57.606 RDMA Provider Type: 1 (No provider specified) 00:22:57.606 RDMA CM Service: 1 (RDMA_CM) 00:22:57.606 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:57.606 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.864 get_feature(0x01) failed 00:22:57.864 get_feature(0x02) failed 00:22:57.864 get_feature(0x04) failed 00:22:57.864 ===================================================== 00:22:57.864 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:22:57.864 ===================================================== 00:22:57.864 Controller Capabilities/Features 00:22:57.864 ================================ 00:22:57.864 Vendor ID: 0000 00:22:57.864 Subsystem Vendor ID: 0000 00:22:57.864 Serial Number: ae932029bca2b0696897 00:22:57.864 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:57.864 Firmware Version: 6.7.0-68 00:22:57.864 Recommended Arb Burst: 6 00:22:57.864 IEEE OUI Identifier: 00 00 00 00:22:57.864 Multi-path I/O 00:22:57.864 May have multiple subsystem ports: Yes 00:22:57.864 May have multiple controllers: Yes 00:22:57.864 Associated with SR-IOV VF: No 00:22:57.864 Max Data Transfer Size: 1048576 00:22:57.864 Max Number of Namespaces: 1024 00:22:57.864 Max Number of I/O Queues: 128 00:22:57.864 NVMe Specification Version (VS): 1.3 00:22:57.864 NVMe Specification Version (Identify): 1.3 00:22:57.864 Maximum Queue Entries: 128 00:22:57.864 Contiguous Queues Required: No 00:22:57.864 Arbitration Mechanisms Supported 00:22:57.864 Weighted Round Robin: Not Supported 00:22:57.864 Vendor Specific: Not Supported 00:22:57.864 Reset Timeout: 7500 ms 00:22:57.864 Doorbell Stride: 4 bytes 00:22:57.864 NVM Subsystem Reset: Not Supported 00:22:57.864 Command Sets Supported 00:22:57.864 NVM Command Set: Supported 00:22:57.864 Boot Partition: Not Supported 00:22:57.864 Memory Page Size Minimum: 4096 bytes 00:22:57.864 Memory Page Size Maximum: 4096 bytes 00:22:57.864 Persistent Memory Region: Not Supported 00:22:57.864 Optional Asynchronous Events Supported 00:22:57.864 Namespace Attribute Notices: Supported 00:22:57.864 Firmware Activation Notices: Not Supported 00:22:57.864 ANA Change Notices: Supported 00:22:57.864 PLE Aggregate Log Change Notices: Not Supported 00:22:57.864 LBA Status Info Alert Notices: Not Supported 00:22:57.864 EGE Aggregate Log Change Notices: Not Supported 00:22:57.864 Normal NVM Subsystem Shutdown event: Not Supported 00:22:57.864 Zone Descriptor Change Notices: Not Supported 00:22:57.864 Discovery Log Change Notices: Not Supported 00:22:57.864 Controller Attributes 00:22:57.864 128-bit Host Identifier: 
Supported 00:22:57.864 Non-Operational Permissive Mode: Not Supported 00:22:57.864 NVM Sets: Not Supported 00:22:57.864 Read Recovery Levels: Not Supported 00:22:57.864 Endurance Groups: Not Supported 00:22:57.864 Predictable Latency Mode: Not Supported 00:22:57.864 Traffic Based Keep ALive: Supported 00:22:57.864 Namespace Granularity: Not Supported 00:22:57.864 SQ Associations: Not Supported 00:22:57.864 UUID List: Not Supported 00:22:57.864 Multi-Domain Subsystem: Not Supported 00:22:57.864 Fixed Capacity Management: Not Supported 00:22:57.864 Variable Capacity Management: Not Supported 00:22:57.864 Delete Endurance Group: Not Supported 00:22:57.864 Delete NVM Set: Not Supported 00:22:57.864 Extended LBA Formats Supported: Not Supported 00:22:57.864 Flexible Data Placement Supported: Not Supported 00:22:57.864 00:22:57.864 Controller Memory Buffer Support 00:22:57.864 ================================ 00:22:57.864 Supported: No 00:22:57.864 00:22:57.864 Persistent Memory Region Support 00:22:57.864 ================================ 00:22:57.864 Supported: No 00:22:57.864 00:22:57.864 Admin Command Set Attributes 00:22:57.864 ============================ 00:22:57.864 Security Send/Receive: Not Supported 00:22:57.864 Format NVM: Not Supported 00:22:57.864 Firmware Activate/Download: Not Supported 00:22:57.864 Namespace Management: Not Supported 00:22:57.864 Device Self-Test: Not Supported 00:22:57.864 Directives: Not Supported 00:22:57.864 NVMe-MI: Not Supported 00:22:57.864 Virtualization Management: Not Supported 00:22:57.864 Doorbell Buffer Config: Not Supported 00:22:57.864 Get LBA Status Capability: Not Supported 00:22:57.864 Command & Feature Lockdown Capability: Not Supported 00:22:57.864 Abort Command Limit: 4 00:22:57.864 Async Event Request Limit: 4 00:22:57.864 Number of Firmware Slots: N/A 00:22:57.864 Firmware Slot 1 Read-Only: N/A 00:22:57.864 Firmware Activation Without Reset: N/A 00:22:57.864 Multiple Update Detection Support: N/A 00:22:57.864 Firmware Update Granularity: No Information Provided 00:22:57.864 Per-Namespace SMART Log: Yes 00:22:57.864 Asymmetric Namespace Access Log Page: Supported 00:22:57.864 ANA Transition Time : 10 sec 00:22:57.864 00:22:57.864 Asymmetric Namespace Access Capabilities 00:22:57.864 ANA Optimized State : Supported 00:22:57.864 ANA Non-Optimized State : Supported 00:22:57.864 ANA Inaccessible State : Supported 00:22:57.864 ANA Persistent Loss State : Supported 00:22:57.864 ANA Change State : Supported 00:22:57.864 ANAGRPID is not changed : No 00:22:57.864 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:57.864 00:22:57.864 ANA Group Identifier Maximum : 128 00:22:57.864 Number of ANA Group Identifiers : 128 00:22:57.864 Max Number of Allowed Namespaces : 1024 00:22:57.864 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:57.864 Command Effects Log Page: Supported 00:22:57.864 Get Log Page Extended Data: Supported 00:22:57.864 Telemetry Log Pages: Not Supported 00:22:57.864 Persistent Event Log Pages: Not Supported 00:22:57.864 Supported Log Pages Log Page: May Support 00:22:57.864 Commands Supported & Effects Log Page: Not Supported 00:22:57.864 Feature Identifiers & Effects Log Page:May Support 00:22:57.864 NVMe-MI Commands & Effects Log Page: May Support 00:22:57.864 Data Area 4 for Telemetry Log: Not Supported 00:22:57.864 Error Log Page Entries Supported: 128 00:22:57.864 Keep Alive: Supported 00:22:57.864 Keep Alive Granularity: 1000 ms 00:22:57.865 00:22:57.865 NVM Command Set Attributes 00:22:57.865 ========================== 
00:22:57.865 Submission Queue Entry Size 00:22:57.865 Max: 64 00:22:57.865 Min: 64 00:22:57.865 Completion Queue Entry Size 00:22:57.865 Max: 16 00:22:57.865 Min: 16 00:22:57.865 Number of Namespaces: 1024 00:22:57.865 Compare Command: Not Supported 00:22:57.865 Write Uncorrectable Command: Not Supported 00:22:57.865 Dataset Management Command: Supported 00:22:57.865 Write Zeroes Command: Supported 00:22:57.865 Set Features Save Field: Not Supported 00:22:57.865 Reservations: Not Supported 00:22:57.865 Timestamp: Not Supported 00:22:57.865 Copy: Not Supported 00:22:57.865 Volatile Write Cache: Present 00:22:57.865 Atomic Write Unit (Normal): 1 00:22:57.865 Atomic Write Unit (PFail): 1 00:22:57.865 Atomic Compare & Write Unit: 1 00:22:57.865 Fused Compare & Write: Not Supported 00:22:57.865 Scatter-Gather List 00:22:57.865 SGL Command Set: Supported 00:22:57.865 SGL Keyed: Supported 00:22:57.865 SGL Bit Bucket Descriptor: Not Supported 00:22:57.865 SGL Metadata Pointer: Not Supported 00:22:57.865 Oversized SGL: Not Supported 00:22:57.865 SGL Metadata Address: Not Supported 00:22:57.865 SGL Offset: Supported 00:22:57.865 Transport SGL Data Block: Not Supported 00:22:57.865 Replay Protected Memory Block: Not Supported 00:22:57.865 00:22:57.865 Firmware Slot Information 00:22:57.865 ========================= 00:22:57.865 Active slot: 0 00:22:57.865 00:22:57.865 Asymmetric Namespace Access 00:22:57.865 =========================== 00:22:57.865 Change Count : 0 00:22:57.865 Number of ANA Group Descriptors : 1 00:22:57.865 ANA Group Descriptor : 0 00:22:57.865 ANA Group ID : 1 00:22:57.865 Number of NSID Values : 1 00:22:57.865 Change Count : 0 00:22:57.865 ANA State : 1 00:22:57.865 Namespace Identifier : 1 00:22:57.865 00:22:57.865 Commands Supported and Effects 00:22:57.865 ============================== 00:22:57.865 Admin Commands 00:22:57.865 -------------- 00:22:57.865 Get Log Page (02h): Supported 00:22:57.865 Identify (06h): Supported 00:22:57.865 Abort (08h): Supported 00:22:57.865 Set Features (09h): Supported 00:22:57.865 Get Features (0Ah): Supported 00:22:57.865 Asynchronous Event Request (0Ch): Supported 00:22:57.865 Keep Alive (18h): Supported 00:22:57.865 I/O Commands 00:22:57.865 ------------ 00:22:57.865 Flush (00h): Supported 00:22:57.865 Write (01h): Supported LBA-Change 00:22:57.865 Read (02h): Supported 00:22:57.865 Write Zeroes (08h): Supported LBA-Change 00:22:57.865 Dataset Management (09h): Supported 00:22:57.865 00:22:57.865 Error Log 00:22:57.865 ========= 00:22:57.865 Entry: 0 00:22:57.865 Error Count: 0x3 00:22:57.865 Submission Queue Id: 0x0 00:22:57.865 Command Id: 0x5 00:22:57.865 Phase Bit: 0 00:22:57.865 Status Code: 0x2 00:22:57.865 Status Code Type: 0x0 00:22:57.865 Do Not Retry: 1 00:22:57.865 Error Location: 0x28 00:22:57.865 LBA: 0x0 00:22:57.865 Namespace: 0x0 00:22:57.865 Vendor Log Page: 0x0 00:22:57.865 ----------- 00:22:57.865 Entry: 1 00:22:57.865 Error Count: 0x2 00:22:57.865 Submission Queue Id: 0x0 00:22:57.865 Command Id: 0x5 00:22:57.865 Phase Bit: 0 00:22:57.865 Status Code: 0x2 00:22:57.865 Status Code Type: 0x0 00:22:57.865 Do Not Retry: 1 00:22:57.865 Error Location: 0x28 00:22:57.865 LBA: 0x0 00:22:57.865 Namespace: 0x0 00:22:57.865 Vendor Log Page: 0x0 00:22:57.865 ----------- 00:22:57.865 Entry: 2 00:22:57.865 Error Count: 0x1 00:22:57.865 Submission Queue Id: 0x0 00:22:57.865 Command Id: 0x0 00:22:57.865 Phase Bit: 0 00:22:57.865 Status Code: 0x2 00:22:57.865 Status Code Type: 0x0 00:22:57.865 Do Not Retry: 1 00:22:57.865 Error Location: 
0x28 00:22:57.865 LBA: 0x0 00:22:57.865 Namespace: 0x0 00:22:57.865 Vendor Log Page: 0x0 00:22:57.865 00:22:57.865 Number of Queues 00:22:57.865 ================ 00:22:57.865 Number of I/O Submission Queues: 128 00:22:57.865 Number of I/O Completion Queues: 128 00:22:57.865 00:22:57.865 ZNS Specific Controller Data 00:22:57.865 ============================ 00:22:57.865 Zone Append Size Limit: 0 00:22:57.865 00:22:57.865 00:22:57.865 Active Namespaces 00:22:57.865 ================= 00:22:57.865 get_feature(0x05) failed 00:22:57.865 Namespace ID:1 00:22:57.865 Command Set Identifier: NVM (00h) 00:22:57.865 Deallocate: Supported 00:22:57.865 Deallocated/Unwritten Error: Not Supported 00:22:57.865 Deallocated Read Value: Unknown 00:22:57.865 Deallocate in Write Zeroes: Not Supported 00:22:57.865 Deallocated Guard Field: 0xFFFF 00:22:57.865 Flush: Supported 00:22:57.865 Reservation: Not Supported 00:22:57.865 Namespace Sharing Capabilities: Multiple Controllers 00:22:57.865 Size (in LBAs): 3125627568 (1490GiB) 00:22:57.865 Capacity (in LBAs): 3125627568 (1490GiB) 00:22:57.865 Utilization (in LBAs): 3125627568 (1490GiB) 00:22:57.865 UUID: 61dc4b71-0796-47af-83ac-2b036a607f41 00:22:57.865 Thin Provisioning: Not Supported 00:22:57.865 Per-NS Atomic Units: Yes 00:22:57.865 Atomic Boundary Size (Normal): 0 00:22:57.865 Atomic Boundary Size (PFail): 0 00:22:57.865 Atomic Boundary Offset: 0 00:22:57.865 NGUID/EUI64 Never Reused: No 00:22:57.865 ANA group ID: 1 00:22:57.865 Namespace Write Protected: No 00:22:57.865 Number of LBA Formats: 1 00:22:57.865 Current LBA Format: LBA Format #00 00:22:57.865 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:57.865 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:57.865 rmmod nvme_rdma 00:22:57.865 rmmod nvme_fabrics 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 
0 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:22:57.865 14:57:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:00.391 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:00.391 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:00.391 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:00.391 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:00.649 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:02.024 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:23:02.283 00:23:02.283 real 0m14.043s 00:23:02.283 user 0m3.949s 00:23:02.283 sys 0m7.900s 00:23:02.283 14:57:35 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:02.283 14:57:35 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.283 ************************************ 00:23:02.283 END TEST nvmf_identify_kernel_target 00:23:02.283 ************************************ 00:23:02.283 14:57:36 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:02.283 14:57:36 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:02.283 14:57:36 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:02.283 14:57:36 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.283 14:57:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:02.283 ************************************ 00:23:02.283 START TEST nvmf_auth_host 00:23:02.283 ************************************ 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:23:02.283 * Looking for test storage... 
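The configure_kernel_target/clean_kernel_target steps traced in the identify_kernel_target run above drive the Linux nvmet target entirely through configfs. The xtrace shows the values being echoed but not their redirection targets, so the sketch below assumes the standard nvmet configfs attribute names (attr_allow_any_host, device_path, enable, addr_*); it is an illustration of the traced flow, not a copy of nvmf/common.sh.

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                                # the trace loads nvmet before touching configfs
mkdir "$subsys" "$subsys/namespaces/1" "$port"
# the trace also writes "SPDK-$nqn" (presumably the model/serial string); the exact
# attribute files behind each echo are not visible in the xtrace and are assumed here
echo 1              > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1   > "$subsys/namespaces/1/device_path"
echo 1              > "$subsys/namespaces/1/enable"
echo 192.168.100.8  > "$port/addr_traddr"
echo rdma           > "$port/addr_trtype"
echo 4420           > "$port/addr_trsvcid"
echo ipv4           > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"           # expose the subsystem on the RDMA port
# teardown, mirroring clean_kernel_target at the end of the test
echo 0 > "$subsys/namespaces/1/enable"        # the bare "echo 0" in the trace; target assumed
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_rdma nvmet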
00:23:02.283 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.283 14:57:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.545 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:07.546 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:07.546 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.546 14:57:41 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:07.546 Found net devices under 0000:da:00.0: mlx_0_0 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:07.546 Found net devices under 0000:da:00.1: mlx_0_1 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:07.546 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:07.806 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:07.806 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:07.806 altname enp218s0f0np0 00:23:07.806 altname ens818f0np0 00:23:07.806 inet 192.168.100.8/24 scope global mlx_0_0 00:23:07.806 valid_lft forever preferred_lft forever 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:07.806 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:07.806 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:07.806 altname enp218s0f1np1 00:23:07.806 altname ens818f1np1 00:23:07.806 inet 192.168.100.9/24 scope global mlx_0_1 00:23:07.806 valid_lft forever preferred_lft forever 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.806 
14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.806 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:07.807 192.168.100.9' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:07.807 192.168.100.9' 00:23:07.807 14:57:41 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:07.807 192.168.100.9' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2941811 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2941811 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2941811 ']' 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
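A minimal sketch of what the get_ip_address trace above (nvmf/common.sh@112-113) reduces to: "ip -o -4 addr show" prints one line per address, awk pulls the ADDR/PREFIX column, and cut drops the prefix length. The interface names below are the ones this run resolved (mlx_0_0 and mlx_0_1); treat them as placeholders on other rigs.

# Sketch of the traced helper; not the verbatim nvmf/common.sh implementation.
get_ip_address_sketch() {
    local interface=$1
    # -o gives one line per address; field 4 is "ADDR/PREFIXLEN"
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address_sketch mlx_0_0   # printed 192.168.100.8 in this run
get_ip_address_sketch mlx_0_1   # printed 192.168.100.9 in this run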
00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.807 14:57:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8f429825973adb1305daef9ce250392 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fK2 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8f429825973adb1305daef9ce250392 0 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8f429825973adb1305daef9ce250392 0 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8f429825973adb1305daef9ce250392 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fK2 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fK2 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fK2 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=452957813520d3e37d9ad11ad94e2438f7342dbb95a3f4f20a2608cf2fc35361 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PlY 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 452957813520d3e37d9ad11ad94e2438f7342dbb95a3f4f20a2608cf2fc35361 3 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 452957813520d3e37d9ad11ad94e2438f7342dbb95a3f4f20a2608cf2fc35361 3 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=452957813520d3e37d9ad11ad94e2438f7342dbb95a3f4f20a2608cf2fc35361 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PlY 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PlY 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.PlY 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4dd2a95e3a0c08924f199c10712e90578dab1e746383f381 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1hc 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4dd2a95e3a0c08924f199c10712e90578dab1e746383f381 0 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4dd2a95e3a0c08924f199c10712e90578dab1e746383f381 0 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4dd2a95e3a0c08924f199c10712e90578dab1e746383f381 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:08.742 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.1hc 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1hc 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.1hc 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=12348db7e1f0594bd74753cf4d4499ff953d5c6aad3fade6 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CkJ 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 12348db7e1f0594bd74753cf4d4499ff953d5c6aad3fade6 2 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 12348db7e1f0594bd74753cf4d4499ff953d5c6aad3fade6 2 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=12348db7e1f0594bd74753cf4d4499ff953d5c6aad3fade6 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CkJ 00:23:09.001 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CkJ 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.CkJ 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9b2f0946ffed0ec3353d31716ac60a88 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.efu 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b2f0946ffed0ec3353d31716ac60a88 1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 9b2f0946ffed0ec3353d31716ac60a88 1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b2f0946ffed0ec3353d31716ac60a88 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.efu 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.efu 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.efu 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ef47d93c225e9b4958dc3a9a679d171 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Wky 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ef47d93c225e9b4958dc3a9a679d171 1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ef47d93c225e9b4958dc3a9a679d171 1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ef47d93c225e9b4958dc3a9a679d171 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Wky 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Wky 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Wky 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:09.002 14:57:42 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c72b89eaa9fe48338104bf8fab5760e18fb402c1ff91b000 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.j0s 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c72b89eaa9fe48338104bf8fab5760e18fb402c1ff91b000 2 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c72b89eaa9fe48338104bf8fab5760e18fb402c1ff91b000 2 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c72b89eaa9fe48338104bf8fab5760e18fb402c1ff91b000 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.j0s 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.j0s 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.j0s 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:09.002 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cd9649e30b201d3592fd6b73a4e1c01a 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CI1 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cd9649e30b201d3592fd6b73a4e1c01a 0 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cd9649e30b201d3592fd6b73a4e1c01a 0 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cd9649e30b201d3592fd6b73a4e1c01a 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CI1 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CI1 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.CI1 
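Each gen_dhchap_key call traced in this block follows the same shape: read len/2 random bytes with xxd, wrap them as a DHHC-1:<two-digit digest index>:<secret body>: string, and park the result in a mode-0600 temp file. The xxd, mktemp and chmod steps below come straight from the trace; the secret-body packing (base64 of the key bytes plus a little-endian CRC32 suffix, the usual DH-HMAC-CHAP secret representation) is an assumption standing in for the python one-liner the script actually runs.

# Hedged reconstruction of the gen_dhchap_key pattern, not the script's own code.
gen_dhchap_key_sketch() {
    local digest_idx=$1 len=$2    # e.g. 0 (null digest) and 32 hex characters
    local hex b64 file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of randomness
    # Assumed packing: base64(key bytes || CRC32 of key bytes, little-endian)
    b64=$(python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); print(base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$hex")
    file=$(mktemp -t spdk.key-sketch.XXX)
    echo "DHHC-1:0${digest_idx}:${b64}:" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key_sketch 0 32    # analogous to the 'gen_dhchap_key null 32' call above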
00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=27848a04a3adb1e7132b299b6aaefb8c7f76d726da98cdbbf698a319db699df4 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:09.260 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TKv 00:23:09.261 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 27848a04a3adb1e7132b299b6aaefb8c7f76d726da98cdbbf698a319db699df4 3 00:23:09.261 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 27848a04a3adb1e7132b299b6aaefb8c7f76d726da98cdbbf698a319db699df4 3 00:23:09.261 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:09.261 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:09.261 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=27848a04a3adb1e7132b299b6aaefb8c7f76d726da98cdbbf698a319db699df4 00:23:09.261 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:09.261 14:57:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TKv 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TKv 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.TKv 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2941811 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2941811 ']' 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
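From this point the trace splits into two halves: the kernel nvmet target is assembled through configfs (the mkdir/echo/ln -s sequence under /sys/kernel/config/nvmet further down), while the SPDK app started above plays the host. The host-side work is plain RPC traffic; rpc_cmd in the trace effectively forwards to the SPDK rpc.py script. A condensed, hedged replay of those RPCs for one key pair, reusing the paths, NQNs and addresses from this run as placeholders (the rpc.py path assumes the workspace layout seen elsewhere in the trace):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Register the generated secret files in the keyring (host/auth.sh@81-82).
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.1hc
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CkJ

# Limit the DH-HMAC-CHAP digests/dhgroups the host will negotiate (host/auth.sh@60).
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel target with in-band authentication (host/auth.sh@61).
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1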
00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:09.261 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fK2 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.PlY ]] 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PlY 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.1hc 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.CkJ ]] 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CkJ 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.518 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.efu 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Wky ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Wky 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.j0s 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.CI1 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.CI1 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.TKv 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:09.519 14:57:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:12.049 Waiting for block devices as requested 00:23:12.049 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:23:12.307 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:12.307 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:12.307 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:12.565 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:12.565 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:12.565 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:12.565 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:12.823 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:12.824 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:12.824 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:12.824 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:13.082 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:13.082 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:13.082 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:13.340 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:13.340 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:13.905 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:13.905 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:13.905 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:13.905 14:57:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:13.906 No valid GPT data, bailing 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:13.906 
14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:13.906 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:23:14.164 00:23:14.164 Discovery Log Number of Records 2, Generation counter 2 00:23:14.164 =====Discovery Log Entry 0====== 00:23:14.164 trtype: rdma 00:23:14.164 adrfam: ipv4 00:23:14.164 subtype: current discovery subsystem 00:23:14.164 treq: not specified, sq flow control disable supported 00:23:14.164 portid: 1 00:23:14.164 trsvcid: 4420 00:23:14.164 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:14.164 traddr: 192.168.100.8 00:23:14.164 eflags: none 00:23:14.164 rdma_prtype: not specified 00:23:14.164 rdma_qptype: connected 00:23:14.164 rdma_cms: rdma-cm 00:23:14.164 rdma_pkey: 0x0000 00:23:14.164 =====Discovery Log Entry 1====== 00:23:14.164 trtype: rdma 00:23:14.164 adrfam: ipv4 00:23:14.164 subtype: nvme subsystem 00:23:14.164 treq: not specified, sq flow control disable supported 00:23:14.164 portid: 1 00:23:14.164 trsvcid: 4420 00:23:14.164 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:14.164 traddr: 192.168.100.8 00:23:14.164 eflags: none 00:23:14.164 rdma_prtype: not specified 00:23:14.164 rdma_qptype: connected 00:23:14.164 rdma_cms: rdma-cm 00:23:14.164 rdma_pkey: 0x0000 00:23:14.164 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:14.164 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:14.164 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:14.164 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:14.164 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.164 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.165 14:57:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.423 nvme0n1 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.423 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.681 nvme0n1 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.681 14:57:48 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.681 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.939 nvme0n1 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.939 14:57:48 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.939 14:57:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.197 nvme0n1 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:15.197 14:57:49 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.197 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.455 nvme0n1 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.455 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.714 nvme0n1 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.714 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.972 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.231 nvme0n1 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
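(The xtrace above covers the sha256/ffdhe2048 pass; the records that follow repeat the same pattern for ffdhe3072 and the larger groups.) For readability, the per-iteration host-side sequence that host/auth.sh is exercising boils down to roughly the bash sketch below. It assumes rpc_cmd is the usual SPDK scripts/rpc.py wrapper sourced by the test scripts and that key pairs named key<N>/ckey<N> were registered earlier in the script (not shown in this excerpt); the variable names here are illustrative, not taken verbatim from host/auth.sh.

  # One iteration of the digest/dhgroup/keyid loop traced above.
  digest=sha256          # DH-HMAC-CHAP hash under test
  dhgroup=ffdhe2048      # FFDHE group under test
  keyid=2                # which of the provisioned key pairs to use

  # Restrict the host to the digest/dhgroup pair being tested.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach to the target over RDMA, authenticating with the keyid-th key pair.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

As the keyid=4 records in the trace show, the --dhchap-ctrlr-key argument is simply dropped when no controller key was provisioned for that key ID.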
00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.231 14:57:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.490 nvme0n1 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.490 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.491 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.749 nvme0n1 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.749 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:16.750 
14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.750 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.008 nvme0n1 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:17.008 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.009 14:57:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.289 nvme0n1 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.289 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:17.548 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.807 nvme0n1 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.807 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 nvme0n1 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.066 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:18.324 14:57:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.324 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.583 nvme0n1 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
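After each attach, the script checks that the authenticated connection actually produced a controller and then tears it down before moving on to the next dhgroup/key, which is what the repeated bdev_nvme_get_controllers / bdev_nvme_detach_controller records above show. A minimal sketch of that check, under the same assumptions as the sketch further up:

  # Confirm the controller named nvme0 came up, then detach it for the next iteration.
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0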
00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.583 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.842 nvme0n1 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.842 14:57:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 nvme0n1 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.409 
14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.409 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.667 nvme0n1 00:23:19.667 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.667 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.667 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.667 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.667 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.667 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.923 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.924 14:57:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.181 nvme0n1 00:23:20.181 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.181 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.181 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.181 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.181 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.181 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
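(Editor's note: the trace above repeats the same per-key sequence for every digest/dhgroup/keyid combination. The following is a condensed reconstruction from this log only, not the verbatim host/auth.sh; the helper names nvmet_auth_set_key, connect_authenticate-style steps, get_main_ns_ip and rpc_cmd, plus the keys[]/ckeys[] arrays and the attach parameters, are taken from the trace, while the exact loop structure and variable handling are assumptions inferred from the iterations shown.)

# Reconstructed sketch of the DH-HMAC-CHAP sweep visible in this trace (sha256/sha384,
# ffdhe2048..ffdhe8192, keyids 0..4 in this particular run).
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Target side: install key (and controller key, when ckeys[keyid] is set)
      # for this digest/dhgroup, as done by nvmet_auth_set_key in the trace.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # Host side: restrict the initiator to the same digest/dhgroup, then attach
      # with DH-HMAC-CHAP over RDMA. The target IP resolves to NVMF_FIRST_TARGET_IP
      # (192.168.100.8 in this run).
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      ip=$(get_main_ns_ip)
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"

      # Verify the controller came up under the expected name, then tear it down.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done

Each "nvme0n1" block in the log corresponds to one pass through this loop: the namespace appears, the controller-name check passes, and the controller is detached before the next keyid is exercised.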
00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.440 
14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.440 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.698 nvme0n1 00:23:20.698 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.698 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.698 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.698 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.698 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.698 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.957 14:57:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.215 nvme0n1 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.215 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:21.473 14:57:55 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.473 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:23:21.785 nvme0n1 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:21.785 14:57:55 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.785 14:57:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 nvme0n1 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:22.468 14:57:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:22.726 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.726 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.726 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.292 nvme0n1 00:23:23.292 14:57:56 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.292 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.292 14:57:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.292 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.292 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.292 14:57:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.292 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.858 nvme0n1 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.858 14:57:57 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.858 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:23.859 14:57:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:24.117 14:57:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.117 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.117 14:57:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.684 nvme0n1 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.684 14:57:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.250 nvme0n1 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.250 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:25.251 
14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.251 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.509 nvme0n1 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.509 
14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.509 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.767 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:25.768 
14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.768 nvme0n1 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.768 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.026 nvme0n1 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.026 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.284 14:57:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.284 nvme0n1 00:23:26.284 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.284 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.284 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.284 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.284 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.543 14:58:00 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.543 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.801 nvme0n1 00:23:26.801 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.801 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.801 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.801 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.802 14:58:00 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.802 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.062 nvme0n1 
00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.062 14:58:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.321 nvme0n1 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.321 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.579 nvme0n1 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.579 14:58:01 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:27.579 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.580 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.838 
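The nvmf/common.sh@741-@755 lines that recur throughout this trace are get_main_ns_ip resolving which environment variable holds the target address for the transport in use, then printing it via indirect expansion. A sketch reconstructed from that xtrace; the TEST_TRANSPORT variable name and the return-1 guards are assumptions, the rest follows the traced steps:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP    # @744
      ip_candidates["tcp"]=NVMF_INITIATOR_IP        # @745

      [[ -z $TEST_TRANSPORT ]] && return 1                    # @747, guard assumed
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747, guard assumed
      ip=${ip_candidates[$TEST_TRANSPORT]}                    # @748
      [[ -z ${!ip} ]] && return 1                             # @750, guard assumed
      echo "${!ip}"    # @755: with rdma this is $NVMF_FIRST_TARGET_IP, 192.168.100.8 here
  }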
14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 nvme0n1 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.838 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:28.096 14:58:01 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.096 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.097 14:58:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.354 nvme0n1 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.354 14:58:02 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.354 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.355 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.355 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.613 nvme0n1 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.613 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 nvme0n1 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.179 14:58:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.440 nvme0n1 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.440 
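The host/auth.sh@101-@104 lines above are the driver loop: for every DH group, every key index is first programmed into the target (nvmet_auth_set_key) and then exercised from the host (connect_authenticate). Roughly, as a sketch; keys[] is the test's array of DHHC-1 secrets (keyids 0-4 in this run) and only the groups reached so far in this excerpt are listed:

  digest=sha384
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)     # groups seen up to this point in the log
  for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
      for keyid in "${!keys[@]}"; do           # host/auth.sh@102
          nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103: program the target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: attach, verify, detach on the host side
      done
  done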
14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.440 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.699 nvme0n1 00:23:29.699 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.699 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.699 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.699 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.699 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.699 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.956 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.214 nvme0n1 00:23:30.214 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:30.214 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.214 14:58:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.214 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.214 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.214 14:58:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.214 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.780 nvme0n1 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.780 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.345 nvme0n1 00:23:31.345 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.345 14:58:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.345 14:58:04 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.345 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.345 14:58:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.345 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.345 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.345 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.346 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.603 nvme0n1 00:23:31.603 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.603 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.603 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.603 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.603 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.603 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.860 14:58:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.117 nvme0n1 00:23:32.117 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.117 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.117 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.117 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.117 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.117 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.374 14:58:06 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.374 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.632 nvme0n1 00:23:32.632 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.632 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.632 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.632 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.632 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.632 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.890 14:58:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.455 nvme0n1 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.455 14:58:07 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.455 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.022 nvme0n1 00:23:34.022 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.022 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.022 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.022 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.022 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.280 14:58:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:34.280 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:34.280 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.280 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.280 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:34.280 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:34.280 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:34.280 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.281 
14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.281 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.844 nvme0n1 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:34.844 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.845 
14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.845 14:58:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 nvme0n1 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.776 14:58:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.341 nvme0n1 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 
00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:36.341 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.342 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.599 nvme0n1 00:23:36.599 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.599 14:58:10 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.599 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.599 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.599 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.599 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.599 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.600 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.858 nvme0n1 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
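For reference, one connect_authenticate iteration from the trace above (sha512/ffdhe2048, key1) condenses to the host-side RPC sequence below. rpc_cmd is the test suite's wrapper around SPDK's rpc.py, and the address, NQNs and key names are exactly the values printed in the log.

  # One attach/verify/detach cycle as exercised by connect_authenticate.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Success criterion: the authenticated controller shows up, then it is torn down again.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0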
00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.858 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:36.859 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.859 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.859 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:36.859 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:36.859 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:36.859 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.859 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.117 nvme0n1 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
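The nvmf/common.sh lines repeated in every iteration are get_main_ns_ip resolving the connect address: the transport maps to a variable name (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp), which then expands to 192.168.100.8. An approximate reconstruction is sketched below; the TEST_TRANSPORT variable name and the exact error handling are assumptions beyond what the trace shows.

  # Approximate reconstruction of get_main_ns_ip from the nvmf/common.sh trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs resolve to the first target IP
          [tcp]=NVMF_INITIATOR_IP       # TCP runs resolve to the initiator IP
      )
      # TEST_TRANSPORT is rdma in this run; the variable name here is an assumption.
      [[ -z ${TEST_TRANSPORT} ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1       # indirect expansion: NVMF_FIRST_TARGET_IP -> 192.168.100.8
      echo "${!ip}"
  }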
'.[].name' 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.118 14:58:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.377 nvme0n1 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.377 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.636 nvme0n1 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.636 14:58:11 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.636 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.894 nvme0n1 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.894 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.895 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.895 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.895 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.153 14:58:11 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.153 14:58:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.412 nvme0n1 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.412 14:58:12 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.412 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.670 nvme0n1 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.670 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.671 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.929 nvme0n1 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.929 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.187 nvme0n1 00:23:39.187 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.187 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.187 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:39.187 14:58:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.187 14:58:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.187 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.188 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 nvme0n1 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
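The host/auth.sh@100-104 markers explain why the same pattern keeps repeating: nested loops over every digest, DH group and key ID, each running one set-key plus attach/verify/detach cycle. The loop shape is roughly the following; the concrete array contents beyond what the log prints are assumptions.

  # Driver loop implied by the host/auth.sh@100-104 trace markers.
  for digest in "${digests[@]}"; do            # e.g. sha384, sha512 in this run
      for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe2048 ... ffdhe8192
          for keyid in "${!keys[@]}"; do       # key IDs 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the target
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # attach, verify, detach
          done
      done
  done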
00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.752 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.011 nvme0n1 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.011 
14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.011 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.012 14:58:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.270 nvme0n1 00:23:40.270 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.270 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.270 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.270 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.270 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.270 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 
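For reference, each digest/dhgroup/keyid pass in this trace drives the same four host-side RPCs. A minimal stand-alone sketch of that sequence follows, assuming an SPDK target is already listening on 192.168.100.8:4420 with the matching DH-HMAC-CHAP key configured on the target side (as nvmet_auth_set_key does above), and using scripts/rpc.py from the SPDK tree in place of the test's rpc_cmd wrapper; the key names (key1/ckey1), NQNs, address, and flags are copied verbatim from the trace, and everything else is illustrative.

# limit the host to the digest/dhgroup under test (sha512 + ffdhe4096 in this pass)
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
# connect to the authenticated subsystem over RDMA, supplying the host key and controller key
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify the controller came up, then detach before the next keyid iteration
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0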
00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:40.528 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.529 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.787 nvme0n1 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:40.787 14:58:14 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.787 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.045 nvme0n1 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.045 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.303 14:58:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.303 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 nvme0n1 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:41.561 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.562 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.562 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.562 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.562 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.562 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:41.562 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.562 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.819 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.820 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.077 nvme0n1 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.077 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.334 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.334 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.334 14:58:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.334 14:58:15 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.334 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.334 14:58:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.334 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 nvme0n1 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.593 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.159 nvme0n1 00:23:43.159 14:58:16 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.159 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.160 14:58:16 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.160 14:58:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 nvme0n1 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDhmNDI5ODI1OTczYWRiMTMwNWRhZWY5Y2UyNTAzOTL6lXmu: 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDUyOTU3ODEzNTIwZDNlMzdkOWFkMTFhZDk0ZTI0MzhmNzM0MmRiYjk1YTNmNGYyMGEyNjA4Y2YyZmMzNTM2MTp2VGQ=: 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.727 14:58:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.336 nvme0n1 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.336 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.920 nvme0n1 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.920 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.921 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.921 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:44.921 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.921 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.921 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWIyZjA5NDZmZmVkMGVjMzM1M2QzMTcxNmFjNjBhODgBCIpz: 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: ]] 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmVmNDdkOTNjMjI1ZTliNDk1OGRjM2E5YTY3OWQxNzFpLeuz: 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:45.199 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.200 14:58:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 nvme0n1 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YzcyYjg5ZWFhOWZlNDgzMzgxMDRiZjhmYWI1NzYwZTE4ZmI0MDJjMWZmOTFiMDAw7SZ+BA==: 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2Q5NjQ5ZTMwYjIwMWQzNTkyZmQ2YjczYTRlMWMwMWFNse/e: 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:45.768 14:58:19 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:45.768 14:58:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:45.769 14:58:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.769 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.769 14:58:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.336 nvme0n1 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mjc4NDhhMDRhM2FkYjFlNzEzMmIyOTliNmFhZWZiOGM3Zjc2ZDcyNmRhOThjZGJiZjY5OGEzMTlkYjY5OWRmNE6bBqE=: 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.336 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.337 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.273 nvme0n1 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGRkMmE5NWUzYTBjMDg5MjRmMTk5YzEwNzEyZTkwNTc4ZGFiMWU3NDYzODNmMzgxtqVLYw==: 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTIzNDhkYjdlMWYwNTk0YmQ3NDc1M2NmNGQ0NDk5ZmY5NTNkNWM2YWFkM2ZhZGU2TE1zGw==: 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.273 14:58:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.273 request: 00:23:47.273 { 00:23:47.273 "name": "nvme0", 00:23:47.273 "trtype": "rdma", 00:23:47.273 "traddr": "192.168.100.8", 00:23:47.273 "adrfam": "ipv4", 00:23:47.273 "trsvcid": "4420", 00:23:47.273 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:47.273 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:47.273 "prchk_reftag": false, 00:23:47.273 "prchk_guard": false, 00:23:47.273 "hdgst": false, 00:23:47.273 "ddgst": false, 00:23:47.273 "method": "bdev_nvme_attach_controller", 00:23:47.273 "req_id": 1 00:23:47.273 } 00:23:47.273 Got JSON-RPC error response 00:23:47.273 response: 00:23:47.273 { 00:23:47.273 "code": -5, 00:23:47.273 "message": "Input/output error" 00:23:47.273 } 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.273 request: 00:23:47.273 { 00:23:47.273 "name": "nvme0", 00:23:47.273 "trtype": "rdma", 00:23:47.273 "traddr": "192.168.100.8", 00:23:47.273 "adrfam": "ipv4", 00:23:47.273 "trsvcid": "4420", 00:23:47.273 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:47.273 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:47.273 "prchk_reftag": false, 00:23:47.273 "prchk_guard": false, 00:23:47.273 "hdgst": false, 00:23:47.273 "ddgst": false, 00:23:47.273 "dhchap_key": "key2", 00:23:47.273 "method": "bdev_nvme_attach_controller", 00:23:47.273 "req_id": 1 00:23:47.273 } 00:23:47.273 Got JSON-RPC error response 00:23:47.273 response: 00:23:47.273 { 00:23:47.273 "code": -5, 00:23:47.273 "message": "Input/output error" 00:23:47.273 } 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.273 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.531 14:58:21 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.531 request: 00:23:47.531 { 00:23:47.531 "name": "nvme0", 00:23:47.531 "trtype": "rdma", 00:23:47.531 "traddr": "192.168.100.8", 00:23:47.531 "adrfam": "ipv4", 00:23:47.531 "trsvcid": "4420", 00:23:47.531 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:47.531 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:47.531 "prchk_reftag": false, 00:23:47.531 "prchk_guard": false, 00:23:47.531 "hdgst": false, 00:23:47.531 "ddgst": false, 00:23:47.531 "dhchap_key": "key1", 00:23:47.531 "dhchap_ctrlr_key": "ckey2", 00:23:47.531 "method": "bdev_nvme_attach_controller", 00:23:47.531 "req_id": 1 00:23:47.531 } 00:23:47.531 Got JSON-RPC error response 00:23:47.531 response: 00:23:47.531 { 00:23:47.531 "code": -5, 00:23:47.531 "message": "Input/output error" 00:23:47.531 } 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 
00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:47.531 rmmod nvme_rdma 00:23:47.531 rmmod nvme_fabrics 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2941811 ']' 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2941811 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2941811 ']' 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2941811 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2941811 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2941811' 00:23:47.531 killing process with pid 2941811 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2941811 00:23:47.531 14:58:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2941811 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:23:47.790 14:58:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:50.333 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:50.333 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:50.615 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:51.986 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:23:52.244 14:58:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fK2 /tmp/spdk.key-null.1hc /tmp/spdk.key-sha256.efu /tmp/spdk.key-sha384.j0s /tmp/spdk.key-sha512.TKv /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:52.244 14:58:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:54.772 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:54.772 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:54.772 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:55.030 00:23:55.030 real 0m52.665s 00:23:55.030 user 0m48.175s 00:23:55.030 sys 0m11.987s 00:23:55.030 14:58:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.030 14:58:28 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.030 ************************************ 00:23:55.030 END TEST nvmf_auth_host 
00:23:55.030 ************************************ 00:23:55.030 14:58:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:55.030 14:58:28 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:23:55.030 14:58:28 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:23:55.030 14:58:28 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:23:55.030 14:58:28 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:23:55.030 14:58:28 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:55.030 14:58:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:55.030 14:58:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.030 14:58:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:55.030 ************************************ 00:23:55.030 START TEST nvmf_bdevperf 00:23:55.030 ************************************ 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:55.030 * Looking for test storage... 00:23:55.030 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.030 14:58:28 
nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:55.030 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.031 14:58:28 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:00.293 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:00.293 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.293 14:58:34 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:00.293 Found net devices under 0000:da:00.0: mlx_0_0 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:00.293 Found net devices under 0000:da:00.1: mlx_0_1 00:24:00.293 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.294 14:58:34 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:00.294 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:00.294 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:00.294 altname enp218s0f0np0 00:24:00.294 altname ens818f0np0 00:24:00.294 inet 192.168.100.8/24 scope global mlx_0_0 00:24:00.294 valid_lft forever preferred_lft forever 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:00.294 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:00.294 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:00.294 altname enp218s0f1np1 00:24:00.294 altname ens818f1np1 00:24:00.294 inet 192.168.100.9/24 scope global mlx_0_1 00:24:00.294 valid_lft forever preferred_lft forever 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:00.294 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:00.551 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:00.551 192.168.100.9' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:00.552 192.168.100.9' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:24:00.552 14:58:34 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:00.552 192.168.100.9' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2955466 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2955466 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2955466 ']' 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.552 14:58:34 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:00.552 [2024-07-15 14:58:34.331906] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:24:00.552 [2024-07-15 14:58:34.331948] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.552 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.552 [2024-07-15 14:58:34.385639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:00.552 [2024-07-15 14:58:34.463722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.552 [2024-07-15 14:58:34.463761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:00.552 [2024-07-15 14:58:34.463768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.552 [2024-07-15 14:58:34.463774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.552 [2024-07-15 14:58:34.463779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.552 [2024-07-15 14:58:34.463889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.552 [2024-07-15 14:58:34.463998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.552 [2024-07-15 14:58:34.463999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.481 [2024-07-15 14:58:35.200393] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1871200/0x18756f0) succeed. 00:24:01.481 [2024-07-15 14:58:35.209455] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18727a0/0x18b6d80) succeed. 
00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.481 Malloc0 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:01.481 [2024-07-15 14:58:35.350344] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:01.481 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:01.481 { 00:24:01.481 "params": { 00:24:01.481 "name": "Nvme$subsystem", 00:24:01.481 "trtype": "$TEST_TRANSPORT", 00:24:01.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:01.481 "adrfam": "ipv4", 00:24:01.481 "trsvcid": "$NVMF_PORT", 00:24:01.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:01.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:01.482 "hdgst": ${hdgst:-false}, 00:24:01.482 "ddgst": ${ddgst:-false} 00:24:01.482 }, 00:24:01.482 "method": "bdev_nvme_attach_controller" 00:24:01.482 } 00:24:01.482 EOF 00:24:01.482 )") 00:24:01.482 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:01.482 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
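
rpc_cmd in the trace is the harness's thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the configuration built above (RDMA transport, a 64 MiB Malloc0 bdev, subsystem cnode1 with a listener on 192.168.100.8:4420) can be reproduced directly. A sketch with every argument copied from the trace; only the $SPDK_DIR variable is assumed.

    RPC="$SPDK_DIR/scripts/rpc.py"   # defaults to the /var/tmp/spdk.sock socket

    sudo $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # -u: io_unit_size in bytes
    sudo $RPC bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB ramdisk, 512 B blocks
    sudo $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    sudo $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    sudo $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
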
00:24:01.482 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:01.482 14:58:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:01.482 "params": { 00:24:01.482 "name": "Nvme1", 00:24:01.482 "trtype": "rdma", 00:24:01.482 "traddr": "192.168.100.8", 00:24:01.482 "adrfam": "ipv4", 00:24:01.482 "trsvcid": "4420", 00:24:01.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:01.482 "hdgst": false, 00:24:01.482 "ddgst": false 00:24:01.482 }, 00:24:01.482 "method": "bdev_nvme_attach_controller" 00:24:01.482 }' 00:24:01.482 [2024-07-15 14:58:35.399431] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:24:01.482 [2024-07-15 14:58:35.399474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955707 ] 00:24:01.739 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.739 [2024-07-15 14:58:35.454535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.739 [2024-07-15 14:58:35.529029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.997 Running I/O for 1 seconds... 00:24:02.928 00:24:02.928 Latency(us) 00:24:02.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.928 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:02.928 Verification LBA range: start 0x0 length 0x4000 00:24:02.928 Nvme1n1 : 1.01 17938.81 70.07 0.00 0.00 7096.36 2574.63 11671.65 00:24:02.928 =================================================================================================================== 00:24:02.928 Total : 17938.81 70.07 0.00 0.00 7096.36 2574.63 11671.65 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2955943 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:03.185 { 00:24:03.185 "params": { 00:24:03.185 "name": "Nvme$subsystem", 00:24:03.185 "trtype": "$TEST_TRANSPORT", 00:24:03.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.185 "adrfam": "ipv4", 00:24:03.185 "trsvcid": "$NVMF_PORT", 00:24:03.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.185 "hdgst": ${hdgst:-false}, 00:24:03.185 "ddgst": ${ddgst:-false} 00:24:03.185 }, 00:24:03.185 "method": "bdev_nvme_attach_controller" 00:24:03.185 } 00:24:03.185 EOF 00:24:03.185 )") 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
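
Both bdevperf runs consume a config generated by gen_nvmf_target_json and handed over a /dev/fd process substitution; the printf output in the trace is the bdev_nvme_attach_controller entry it carries. Below is a sketch of the same run with the config written to a file instead. The attach-controller parameters are copied from the trace; the surrounding "subsystems"/"bdev" envelope is the standard SPDK JSON config schema reconstructed here as an assumption, not quoted from the harness.

    cat > /tmp/bdevperf_nvmf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # -q 128: queue depth, -o 4096: 4 KiB I/O size, -w verify, -t 1: 1 second run.
    # The second run traced below uses -t 15 -f so it keeps running across the
    # target kill/restart that the test exercises next.
    sudo "$SPDK_DIR/build/examples/bdevperf" --json /tmp/bdevperf_nvmf.json \
        -q 128 -o 4096 -w verify -t 1
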
00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:03.185 14:58:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:03.185 "params": { 00:24:03.185 "name": "Nvme1", 00:24:03.185 "trtype": "rdma", 00:24:03.185 "traddr": "192.168.100.8", 00:24:03.185 "adrfam": "ipv4", 00:24:03.185 "trsvcid": "4420", 00:24:03.185 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.185 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.185 "hdgst": false, 00:24:03.185 "ddgst": false 00:24:03.185 }, 00:24:03.185 "method": "bdev_nvme_attach_controller" 00:24:03.185 }' 00:24:03.185 [2024-07-15 14:58:36.961153] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:24:03.185 [2024-07-15 14:58:36.961204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2955943 ] 00:24:03.185 EAL: No free 2048 kB hugepages reported on node 1 00:24:03.185 [2024-07-15 14:58:37.016609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.185 [2024-07-15 14:58:37.086249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.444 Running I/O for 15 seconds... 00:24:06.721 14:58:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2955466 00:24:06.721 14:58:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:07.289 [2024-07-15 14:58:40.951375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183f00 00:24:07.289 [2024-07-15 14:58:40.951412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.289 [2024-07-15 14:58:40.951431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183f00 00:24:07.289 [2024-07-15 14:58:40.951454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.289 [2024-07-15 14:58:40.951464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183f00 00:24:07.289 [2024-07-15 14:58:40.951470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.289 [2024-07-15 14:58:40.951478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183f00 00:24:07.289 [2024-07-15 14:58:40.951488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:83 nsid:1 lba:120176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120248 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183f00 
00:24:07.290 [2024-07-15 14:58:40.951776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:120416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.290 [2024-07-15 14:58:40.951984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183f00 00:24:07.290 [2024-07-15 14:58:40.951990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.951998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 
dnr:0 00:24:07.291 [2024-07-15 14:58:40.952169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:120560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:120640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:57 nsid:1 lba:120688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183f00 00:24:07.291 [2024-07-15 14:58:40.952531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.291 [2024-07-15 14:58:40.952542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120760 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007510000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183f00 00:24:07.292 [2024-07-15 14:58:40.952680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 
[2024-07-15 14:58:40.952833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.952989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.952998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121072 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.292 [2024-07-15 14:58:40.953113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.292 [2024-07-15 14:58:40.953120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.953126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.953135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.953141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.953148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.953155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.953163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.953169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.953177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.953183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.953190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.953196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.953205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.953212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.961634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.961642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.961651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:07.293 [2024-07-15 14:58:40.961658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ad927000 sqhd:52b0 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.963505] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:07.293 [2024-07-15 14:58:40.963517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:07.293 [2024-07-15 14:58:40.963524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121152 len:8 PRP1 0x0 PRP2 0x0 00:24:07.293 [2024-07-15 14:58:40.963531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.963574] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:24:07.293 [2024-07-15 14:58:40.963599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.293 [2024-07-15 14:58:40.963607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.963614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.293 [2024-07-15 14:58:40.963620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.963627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.293 [2024-07-15 14:58:40.963635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.963642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.293 [2024-07-15 14:58:40.963648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.293 [2024-07-15 14:58:40.980271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:07.293 [2024-07-15 14:58:40.980314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:07.293 [2024-07-15 14:58:40.980337] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:07.293 [2024-07-15 14:58:40.983435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:07.293 [2024-07-15 14:58:40.986649] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:07.293 [2024-07-15 14:58:40.986695] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:07.293 [2024-07-15 14:58:40.986714] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:08.228 [2024-07-15 14:58:41.990892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:08.228 [2024-07-15 14:58:41.990944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:08.228 [2024-07-15 14:58:41.991520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:08.228 [2024-07-15 14:58:41.991527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:08.228 [2024-07-15 14:58:41.991534] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:08.228 [2024-07-15 14:58:41.993920] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:08.228 [2024-07-15 14:58:41.994231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:08.228 [2024-07-15 14:58:42.006738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:08.228 [2024-07-15 14:58:42.009698] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:08.228 [2024-07-15 14:58:42.009714] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:08.228 [2024-07-15 14:58:42.009720] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:09.161 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2955466 Killed "${NVMF_APP[@]}" "$@" 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2956950 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2956950 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2956950 ']' 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.161 14:58:42 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.161 [2024-07-15 14:58:42.976812] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:24:09.161 [2024-07-15 14:58:42.976852] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.161 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.161 [2024-07-15 14:58:43.013528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.161 [2024-07-15 14:58:43.013559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.161 [2024-07-15 14:58:43.013734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.161 [2024-07-15 14:58:43.013743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.161 [2024-07-15 14:58:43.013751] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:09.161 [2024-07-15 14:58:43.015713] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.161 [2024-07-15 14:58:43.016517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.161 [2024-07-15 14:58:43.028688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.161 [2024-07-15 14:58:43.031173] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:09.161 [2024-07-15 14:58:43.031190] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:09.161 [2024-07-15 14:58:43.031196] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:24:09.161 [2024-07-15 14:58:43.033501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.419 [2024-07-15 14:58:43.113420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.419 [2024-07-15 14:58:43.113454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.419 [2024-07-15 14:58:43.113461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.419 [2024-07-15 14:58:43.113466] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.419 [2024-07-15 14:58:43.113471] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
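The app_setup_trace notices above describe how the tracepoint data enabled by -e 0xFFFF can be inspected while the target runs. As a minimal sketch, assuming the spdk_trace binary sits in the build tree used by this job, the snapshot could be taken live or the shared-memory ring copied out for offline decoding, exactly as the notices suggest:

    # decode the live trace ring of app instance 0 (shm file nvmf_trace.0)
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or keep the raw ring for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0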
00:24:09.419 [2024-07-15 14:58:43.113509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.419 [2024-07-15 14:58:43.113614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.419 [2024-07-15 14:58:43.113616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.983 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:09.983 [2024-07-15 14:58:43.848004] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f7c200/0x1f806f0) succeed. 00:24:09.983 [2024-07-15 14:58:43.857066] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f7d7a0/0x1fc1d80) succeed. 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.240 Malloc0 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.240 14:58:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:10.240 [2024-07-15 14:58:43.999099] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:10.240 14:58:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
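The rpc_cmd calls traced above perform the target bring-up for the bdevperf test: create the RDMA transport, back it with a malloc bdev, and expose it through subsystem nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420. Outside the test harness, roughly the same sequence could be issued with scripts/rpc.py against the running nvmf_tgt; this is a sketch with the flags copied from the trace above (the $RPC shorthand and socket path are illustrative, not part of the script):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192     # RDMA transport, flags as traced above
    $RPC bdev_malloc_create 64 512 -b Malloc0                                # 64 MiB ramdisk, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420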
00:24:10.240 14:58:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2955943 00:24:10.240 [2024-07-15 14:58:44.035337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:10.240 [2024-07-15 14:58:44.035363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:10.240 [2024-07-15 14:58:44.035543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:10.240 [2024-07-15 14:58:44.035551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:10.240 [2024-07-15 14:58:44.035559] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:24:10.240 [2024-07-15 14:58:44.038302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:10.240 [2024-07-15 14:58:44.046715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:10.240 [2024-07-15 14:58:44.093870] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:20.198 00:24:20.198 Latency(us) 00:24:20.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.198 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:20.198 Verification LBA range: start 0x0 length 0x4000 00:24:20.198 Nvme1n1 : 15.01 13063.74 51.03 10492.91 0.00 5413.38 450.56 1062557.01 00:24:20.198 =================================================================================================================== 00:24:20.198 Total : 13063.74 51.03 10492.91 0.00 5413.38 450.56 1062557.01 00:24:20.198 14:58:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:20.198 14:58:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.198 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:20.199 rmmod nvme_rdma 00:24:20.199 rmmod nvme_fabrics 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2956950 ']' 00:24:20.199 14:58:52 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2956950 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2956950 ']' 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2956950 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2956950 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2956950' 00:24:20.199 killing process with pid 2956950 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2956950 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2956950 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:20.199 00:24:20.199 real 0m24.093s 00:24:20.199 user 1m4.139s 00:24:20.199 sys 0m5.062s 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.199 14:58:52 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:20.199 ************************************ 00:24:20.199 END TEST nvmf_bdevperf 00:24:20.199 ************************************ 00:24:20.199 14:58:52 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:20.199 14:58:52 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:20.199 14:58:52 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:20.199 14:58:52 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.199 14:58:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:20.199 ************************************ 00:24:20.199 START TEST nvmf_target_disconnect 00:24:20.199 ************************************ 00:24:20.199 14:58:52 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:20.199 * Looking for test storage... 
00:24:20.199 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.199 14:58:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:24.374 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:24.374 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:24.374 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:24.375 Found net devices under 0000:da:00.0: mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:24.375 Found net devices under 0000:da:00.1: mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:24.375 14:58:57 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:24.375 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:24.375 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:24.375 altname enp218s0f0np0 00:24:24.375 altname ens818f0np0 00:24:24.375 inet 192.168.100.8/24 scope global mlx_0_0 00:24:24.375 valid_lft forever preferred_lft forever 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:24.375 14:58:57 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:24.375 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:24.375 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:24.375 altname enp218s0f1np1 00:24:24.375 altname ens818f1np1 00:24:24.375 inet 192.168.100.9/24 scope global mlx_0_1 00:24:24.375 valid_lft forever preferred_lft forever 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.375 14:58:57 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:24.375 192.168.100.9' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:24.375 192.168.100.9' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:24.375 192.168.100.9' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:24.375 ************************************ 00:24:24.375 START TEST nvmf_target_disconnect_tc1 00:24:24.375 ************************************ 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:24:24.375 14:58:57 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:24.375 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.375 [2024-07-15 14:58:58.077734] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:24.375 [2024-07-15 14:58:58.077825] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:24.375 [2024-07-15 14:58:58.077837] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:24:25.307 [2024-07-15 14:58:59.081926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:25.307 [2024-07-15 14:58:59.081983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:25.307 [2024-07-15 14:58:59.082007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:24:25.307 [2024-07-15 14:58:59.082057] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:25.307 [2024-07-15 14:58:59.082078] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:25.307 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:24:25.307 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:25.307 Initializing NVMe Controllers 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:25.307 00:24:25.307 real 0m1.120s 00:24:25.307 user 0m0.936s 00:24:25.307 sys 0m0.170s 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:25.307 ************************************ 00:24:25.307 END TEST nvmf_target_disconnect_tc1 00:24:25.307 ************************************ 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:25.307 ************************************ 00:24:25.307 START TEST nvmf_target_disconnect_tc2 00:24:25.307 ************************************ 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2961801 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2961801 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2961801 ']' 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.307 14:58:59 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.307 [2024-07-15 14:58:59.218635] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:24:25.307 [2024-07-15 14:58:59.218689] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.563 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.563 [2024-07-15 14:58:59.285283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:25.563 [2024-07-15 14:58:59.356922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.563 [2024-07-15 14:58:59.356961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.563 [2024-07-15 14:58:59.356968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.563 [2024-07-15 14:58:59.356973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.563 [2024-07-15 14:58:59.356978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
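waitforlisten above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A hypothetical, simplified stand-in for that helper (the real one in the test common code does more checking) is to poll the socket with an RPC that every SPDK app serves, bailing out if the process dies first:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    # simplified wait loop, not the actual waitforlisten implementation
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # target died during startup
        sleep 0.5
    done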
00:24:25.563 [2024-07-15 14:58:59.357627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:25.563 [2024-07-15 14:58:59.357714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:25.563 [2024-07-15 14:58:59.357797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:25.563 [2024-07-15 14:58:59.357799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:26.122 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.122 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:26.122 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:26.123 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:26.123 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.378 Malloc0 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.378 [2024-07-15 14:59:00.117233] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6e5cf0/0x6f18c0) succeed. 00:24:26.378 [2024-07-15 14:59:00.126510] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6e7330/0x732f50) succeed. 
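The rest of tc2, visible in the output that follows, exercises a target-side disconnect: the subsystem and RDMA listener are added, the reconnect example is started against 192.168.100.8:4420, and after a short delay the first nvmf_tgt (pid 2961801) is killed with SIGKILL so the initiator sees every outstanding I/O fail and begins retrying until a replacement target appears. A sketch of that sequence, with the reconnect command line taken verbatim from the trace and $nvmfpid reused from the startup sketch above:

    # background I/O workload: QD 32, 4 KiB I/Os, 50/50 random read/write, 10 s, cores 0-3
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"   # hard-kill the target; queued I/O completes with errors on the host
    # a new nvmf_tgt is then started and reconfigured so the host can reconnect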
00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.378 [2024-07-15 14:59:00.267359] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2962056 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:26.378 14:59:00 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:26.634 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.660 14:59:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2961801 00:24:28.660 14:59:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:29.589 
All 32 outstanding I/Os (a mix of reads and writes) completed with error (sct=0, sc=8), and each completion was followed by 'starting I/O failed'. 00:24:29.589 [2024-07-15 14:59:03.447730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.521 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2961801 Killed "${NVMF_APP[@]}" "$@" 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.522 14:59:04
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2962678 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2962678 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2962678 ']' 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:30.522 14:59:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:30.522 [2024-07-15 14:59:04.343248] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 
00:24:30.522 [2024-07-15 14:59:04.343298] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.522 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.522 [2024-07-15 14:59:04.409427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.779 All 32 outstanding I/Os (a mix of reads and writes) completed with error (sct=0, sc=8), and each completion was followed by 'starting I/O failed'. 00:24:30.779 [2024-07-15 14:59:04.452674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:30.779 [2024-07-15 14:59:04.488365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:30.779 [2024-07-15 14:59:04.488397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.779 [2024-07-15 14:59:04.488403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.779 [2024-07-15 14:59:04.488409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.779 [2024-07-15 14:59:04.488414] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.779 [2024-07-15 14:59:04.488526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:30.779 [2024-07-15 14:59:04.488640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:30.779 [2024-07-15 14:59:04.488701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:30.779 [2024-07-15 14:59:04.488702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.343 Malloc0 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.343 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.343 [2024-07-15 14:59:05.236739] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fbecf0/0x1fca8c0) succeed. 00:24:31.343 [2024-07-15 14:59:05.247446] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fc0330/0x200bf50) succeed. 
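After target_disconnect.sh kills the first target (pid 2961801), disconnect_init relaunches nvmf_tgt on the upper four cores, and the reactors come up on cores 4-7 as logged above. A minimal sketch of that restart, using only the flags and paths that appear in the log (the backgrounding and pid capture mirror what nvmfappstart does via NVMF_APP):

  # relaunch the target: shm id 0, tracepoint group mask 0xFFFF, core mask 0xF0 (cores 4-7)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # waitforlisten then polls the RPC socket (/var/tmp/spdk.sock) before the rpc_cmd setup is re-run;
  # a runtime trace snapshot can be captured exactly as the application suggests:
  spdk_trace -s nvmf -i 0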
00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.602 [2024-07-15 14:59:05.390679] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.602 14:59:05 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2962056 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting 
I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Read completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 Write completed with error (sct=0, sc=8) 00:24:31.602 starting I/O failed 00:24:31.602 [2024-07-15 14:59:05.457880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.602 [2024-07-15 14:59:05.464601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.603 [2024-07-15 14:59:05.464653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.603 [2024-07-15 14:59:05.464672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.603 [2024-07-15 14:59:05.464679] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.603 [2024-07-15 14:59:05.464688] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.603 [2024-07-15 14:59:05.474783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.603 qpair failed and we were unable to recover it. 
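The failure signature that begins here is what the rest of this run keeps repeating: the reconnect host's I/O-queue CONNECT carries a controller ID (0x1) that the restarted target does not recognize, the target rejects it (seen on the host as 'Connect command completed with error: sct 1, sc 130'), and the qpair is torn down with CQ transport error -6. A quick way to confirm the relaunched target really did get its subsystem, namespace and listener back (a sketch, assuming the default RPC socket) is:

  scripts/rpc.py nvmf_get_subsystems
  scripts/rpc.py bdev_get_bdevs -b Malloc0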
00:24:31.603 [2024-07-15 14:59:05.484716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.603 [2024-07-15 14:59:05.484763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.603 [2024-07-15 14:59:05.484779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.603 [2024-07-15 14:59:05.484787] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.603 [2024-07-15 14:59:05.484794] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.603 [2024-07-15 14:59:05.495097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.603 qpair failed and we were unable to recover it. 00:24:31.603 [2024-07-15 14:59:05.504735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.603 [2024-07-15 14:59:05.504769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.603 [2024-07-15 14:59:05.504785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.603 [2024-07-15 14:59:05.504791] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.603 [2024-07-15 14:59:05.504797] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.603 [2024-07-15 14:59:05.515188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.603 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.524781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.524826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.524841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.524848] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.524854] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.535126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 
00:24:31.861 [2024-07-15 14:59:05.544875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.544920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.544935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.544942] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.544947] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.555195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.564773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.564813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.564828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.564834] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.564840] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.575136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.584976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.585015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.585029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.585036] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.585042] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.595249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 
00:24:31.861 [2024-07-15 14:59:05.605133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.605173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.605187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.605194] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.605200] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.615506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.624940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.624981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.624996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.625003] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.625008] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.635336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.645110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.645150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.645164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.645174] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.645179] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.655483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 
00:24:31.861 [2024-07-15 14:59:05.665090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.665130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.665145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.665152] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.665158] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.675623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.685140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.685177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.685191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.685198] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.685204] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.695666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.705217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.705252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.705267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.705273] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.705279] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.715719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 
00:24:31.861 [2024-07-15 14:59:05.725323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.861 [2024-07-15 14:59:05.725363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.861 [2024-07-15 14:59:05.725378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.861 [2024-07-15 14:59:05.725384] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.861 [2024-07-15 14:59:05.725389] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.861 [2024-07-15 14:59:05.735428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.861 qpair failed and we were unable to recover it. 00:24:31.861 [2024-07-15 14:59:05.745377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.862 [2024-07-15 14:59:05.745409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.862 [2024-07-15 14:59:05.745423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.862 [2024-07-15 14:59:05.745430] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.862 [2024-07-15 14:59:05.745435] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.862 [2024-07-15 14:59:05.755908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.862 qpair failed and we were unable to recover it. 00:24:31.862 [2024-07-15 14:59:05.765361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:31.862 [2024-07-15 14:59:05.765400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:31.862 [2024-07-15 14:59:05.765414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:31.862 [2024-07-15 14:59:05.765420] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:31.862 [2024-07-15 14:59:05.765426] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.862 [2024-07-15 14:59:05.775774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.862 qpair failed and we were unable to recover it. 
00:24:32.119 [2024-07-15 14:59:05.786631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.119 [2024-07-15 14:59:05.786673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.119 [2024-07-15 14:59:05.786688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.119 [2024-07-15 14:59:05.786694] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.119 [2024-07-15 14:59:05.786700] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.119 [2024-07-15 14:59:05.796095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.119 qpair failed and we were unable to recover it. 00:24:32.119 [2024-07-15 14:59:05.805444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.119 [2024-07-15 14:59:05.805481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.119 [2024-07-15 14:59:05.805495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.119 [2024-07-15 14:59:05.805501] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.119 [2024-07-15 14:59:05.805507] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.119 [2024-07-15 14:59:05.816137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.119 qpair failed and we were unable to recover it. 00:24:32.119 [2024-07-15 14:59:05.825524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.119 [2024-07-15 14:59:05.825567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.119 [2024-07-15 14:59:05.825585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.119 [2024-07-15 14:59:05.825591] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.119 [2024-07-15 14:59:05.825597] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.119 [2024-07-15 14:59:05.835996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.119 qpair failed and we were unable to recover it. 
00:24:32.119 [2024-07-15 14:59:05.845869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.119 [2024-07-15 14:59:05.845906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.119 [2024-07-15 14:59:05.845920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.119 [2024-07-15 14:59:05.845927] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.119 [2024-07-15 14:59:05.845932] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.119 [2024-07-15 14:59:05.856119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.119 qpair failed and we were unable to recover it. 00:24:32.119 [2024-07-15 14:59:05.865729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.119 [2024-07-15 14:59:05.865774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.119 [2024-07-15 14:59:05.865789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.119 [2024-07-15 14:59:05.865795] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:05.865801] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:05.876291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 00:24:32.120 [2024-07-15 14:59:05.885869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:05.885907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:05.885921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:05.885928] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:05.885933] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:05.896000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 
00:24:32.120 [2024-07-15 14:59:05.905834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:05.905867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:05.905882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:05.905888] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:05.905897] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:05.916364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 00:24:32.120 [2024-07-15 14:59:05.925898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:05.925936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:05.925951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:05.925957] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:05.925962] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:05.936239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 00:24:32.120 [2024-07-15 14:59:05.945977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:05.946016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:05.946031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:05.946037] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:05.946043] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:05.956226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 
00:24:32.120 [2024-07-15 14:59:05.966029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:05.966067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:05.966082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:05.966089] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:05.966094] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:05.976350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 00:24:32.120 [2024-07-15 14:59:05.986131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:05.986164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:05.986179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:05.986185] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:05.986191] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:05.996535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 00:24:32.120 [2024-07-15 14:59:06.006078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:06.006115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:06.006129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:06.006136] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:06.006141] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:06.016206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 
00:24:32.120 [2024-07-15 14:59:06.026282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.120 [2024-07-15 14:59:06.026322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.120 [2024-07-15 14:59:06.026336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.120 [2024-07-15 14:59:06.026342] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.120 [2024-07-15 14:59:06.026348] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.120 [2024-07-15 14:59:06.036499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.120 qpair failed and we were unable to recover it. 00:24:32.390 [2024-07-15 14:59:06.046257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.390 [2024-07-15 14:59:06.046311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.390 [2024-07-15 14:59:06.046326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.390 [2024-07-15 14:59:06.046332] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.046338] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.056563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.066341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.066387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.066401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.066408] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.066413] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.076677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 
00:24:32.391 [2024-07-15 14:59:06.086303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.086345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.086359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.086369] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.086375] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.096705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.106476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.106515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.106529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.106535] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.106547] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.116663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.126509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.126547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.126561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.126568] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.126573] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.136894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 
00:24:32.391 [2024-07-15 14:59:06.146705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.146740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.146754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.146761] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.146766] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.156969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.166678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.166713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.166728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.166734] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.166740] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.176881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.186614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.186664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.186679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.186685] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.186691] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.197003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 
00:24:32.391 [2024-07-15 14:59:06.206749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.206788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.206802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.206809] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.206815] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.217065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.226907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.226946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.226960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.226967] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.226973] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.237200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.246938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.246974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.246989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.246995] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.247001] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.257224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 
00:24:32.391 [2024-07-15 14:59:06.267000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.267036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.267053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.267060] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.267065] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.277290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.391 qpair failed and we were unable to recover it. 00:24:32.391 [2024-07-15 14:59:06.287064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.391 [2024-07-15 14:59:06.287098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.391 [2024-07-15 14:59:06.287112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.391 [2024-07-15 14:59:06.287118] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.391 [2024-07-15 14:59:06.287124] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.391 [2024-07-15 14:59:06.297313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.392 qpair failed and we were unable to recover it. 00:24:32.687 [2024-07-15 14:59:06.307045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.687 [2024-07-15 14:59:06.307098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.687 [2024-07-15 14:59:06.307112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.687 [2024-07-15 14:59:06.307119] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.687 [2024-07-15 14:59:06.307125] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.687 [2024-07-15 14:59:06.317424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.687 qpair failed and we were unable to recover it. 
00:24:32.687 [2024-07-15 14:59:06.327097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.687 [2024-07-15 14:59:06.327134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.687 [2024-07-15 14:59:06.327164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.687 [2024-07-15 14:59:06.327172] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.687 [2024-07-15 14:59:06.327178] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.687 [2024-07-15 14:59:06.337440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.687 qpair failed and we were unable to recover it. 00:24:32.687 [2024-07-15 14:59:06.347170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.687 [2024-07-15 14:59:06.347207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.347222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.347228] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.347237] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.357561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.367202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.367242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.367256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.367263] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.367268] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.377503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 
00:24:32.688 [2024-07-15 14:59:06.387239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.387274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.387288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.387295] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.387301] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.397947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.407340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.407378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.407392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.407399] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.407404] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.417603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.427497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.427534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.427553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.427560] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.427566] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.437687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 
00:24:32.688 [2024-07-15 14:59:06.447462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.447492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.447506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.447513] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.447518] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.457752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.467422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.467460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.467474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.467481] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.467487] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.477963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.487514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.487556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.487571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.487577] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.487583] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.497976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 
00:24:32.688 [2024-07-15 14:59:06.507634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.507672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.507688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.507694] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.507699] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.518129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.527663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.527699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.527717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.527724] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.527730] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.538158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.547679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.547712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.547726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.547732] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.547738] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.558170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 
00:24:32.688 [2024-07-15 14:59:06.567818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.567855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.567869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.567876] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.567882] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.688 [2024-07-15 14:59:06.578228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.688 qpair failed and we were unable to recover it. 00:24:32.688 [2024-07-15 14:59:06.587928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.688 [2024-07-15 14:59:06.587984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.688 [2024-07-15 14:59:06.587999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.688 [2024-07-15 14:59:06.588006] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.688 [2024-07-15 14:59:06.588012] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.598251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.607938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.607975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.607989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.607995] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.608001] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.618237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 
00:24:32.956 [2024-07-15 14:59:06.628033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.628071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.628086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.628092] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.628098] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.638347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.648069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.648104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.648118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.648125] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.648130] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.658484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.668029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.668067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.668081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.668088] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.668094] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.678703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 
00:24:32.956 [2024-07-15 14:59:06.688046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.688080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.688095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.688101] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.688106] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.698489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.708150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.708183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.708200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.708206] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.708212] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.718653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.728271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.728308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.728322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.728328] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.728334] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.738623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 
00:24:32.956 [2024-07-15 14:59:06.748371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.748410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.748425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.748431] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.748437] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.758747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.768306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.768346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.768360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.768366] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.768372] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.778824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.788453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.788485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.788500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.788506] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.788514] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.798897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 
00:24:32.956 [2024-07-15 14:59:06.808514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.808553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.808567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.808574] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.808579] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.818922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.828590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.828630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.828644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.828650] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.956 [2024-07-15 14:59:06.828656] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.956 [2024-07-15 14:59:06.839003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.956 qpair failed and we were unable to recover it. 00:24:32.956 [2024-07-15 14:59:06.848562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.956 [2024-07-15 14:59:06.848604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.956 [2024-07-15 14:59:06.848618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.956 [2024-07-15 14:59:06.848624] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.957 [2024-07-15 14:59:06.848630] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.957 [2024-07-15 14:59:06.859172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.957 qpair failed and we were unable to recover it. 
00:24:32.957 [2024-07-15 14:59:06.868659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:32.957 [2024-07-15 14:59:06.868693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:32.957 [2024-07-15 14:59:06.868707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:32.957 [2024-07-15 14:59:06.868713] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:32.957 [2024-07-15 14:59:06.868720] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.214 [2024-07-15 14:59:06.879199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.214 qpair failed and we were unable to recover it. 00:24:33.214 [2024-07-15 14:59:06.888614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.214 [2024-07-15 14:59:06.888656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.214 [2024-07-15 14:59:06.888670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.214 [2024-07-15 14:59:06.888677] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.214 [2024-07-15 14:59:06.888682] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.214 [2024-07-15 14:59:06.899162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.214 qpair failed and we were unable to recover it. 00:24:33.214 [2024-07-15 14:59:06.908858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.214 [2024-07-15 14:59:06.908897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.214 [2024-07-15 14:59:06.908912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:06.908918] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:06.908924] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:06.919202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 
00:24:33.215 [2024-07-15 14:59:06.928949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:06.928986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:06.929001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:06.929007] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:06.929014] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:06.939150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 00:24:33.215 [2024-07-15 14:59:06.948950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:06.948987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:06.949001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:06.949008] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:06.949013] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:06.959442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 00:24:33.215 [2024-07-15 14:59:06.968978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:06.969017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:06.969034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:06.969041] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:06.969047] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:06.979344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 
00:24:33.215 [2024-07-15 14:59:06.989105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:06.989148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:06.989164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:06.989170] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:06.989176] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:06.999531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 00:24:33.215 [2024-07-15 14:59:07.009182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:07.009217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:07.009231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:07.009238] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:07.009243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:07.019421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 00:24:33.215 [2024-07-15 14:59:07.029083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:07.029121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:07.029135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:07.029142] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:07.029147] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:07.039932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 
00:24:33.215 [2024-07-15 14:59:07.049193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:07.049231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:07.049245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:07.049252] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:07.049257] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:07.059492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 00:24:33.215 [2024-07-15 14:59:07.069293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:07.069337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:07.069352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:07.069359] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:07.069364] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:07.079654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 00:24:33.215 [2024-07-15 14:59:07.089357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:07.089397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:07.089411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:07.089417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:07.089423] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:07.099680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 
00:24:33.215 [2024-07-15 14:59:07.109321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:07.109358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:07.109371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:07.109378] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:07.109383] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.215 [2024-07-15 14:59:07.119860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.215 qpair failed and we were unable to recover it. 00:24:33.215 [2024-07-15 14:59:07.129494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.215 [2024-07-15 14:59:07.129529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.215 [2024-07-15 14:59:07.129548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.215 [2024-07-15 14:59:07.129555] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.215 [2024-07-15 14:59:07.129561] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.139851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 00:24:33.473 [2024-07-15 14:59:07.149355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.149397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.149414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.149421] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.149427] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.160015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 
00:24:33.473 [2024-07-15 14:59:07.169623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.169659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.169673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.169680] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.169686] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.180056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 00:24:33.473 [2024-07-15 14:59:07.189499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.189544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.189558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.189565] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.189571] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.200049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 00:24:33.473 [2024-07-15 14:59:07.209652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.209693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.209707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.209714] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.209720] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.220000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 
00:24:33.473 [2024-07-15 14:59:07.229743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.229781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.229795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.229802] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.229810] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.240185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 00:24:33.473 [2024-07-15 14:59:07.249729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.249771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.249785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.249792] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.249798] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.260206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 00:24:33.473 [2024-07-15 14:59:07.269839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.269879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.269893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.269899] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.269905] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.280088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 
00:24:33.473 [2024-07-15 14:59:07.289957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.289995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.290010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.290017] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.290022] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.300429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 00:24:33.473 [2024-07-15 14:59:07.310005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.310042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.310057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.310063] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.310069] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.320440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 00:24:33.473 [2024-07-15 14:59:07.329976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.473 [2024-07-15 14:59:07.330015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.473 [2024-07-15 14:59:07.330029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.473 [2024-07-15 14:59:07.330035] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.473 [2024-07-15 14:59:07.330041] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.473 [2024-07-15 14:59:07.340333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.473 qpair failed and we were unable to recover it. 
00:24:33.474 [2024-07-15 14:59:07.350078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.474 [2024-07-15 14:59:07.350118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.474 [2024-07-15 14:59:07.350132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.474 [2024-07-15 14:59:07.350138] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.474 [2024-07-15 14:59:07.350144] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.474 [2024-07-15 14:59:07.360602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.474 qpair failed and we were unable to recover it. 00:24:33.474 [2024-07-15 14:59:07.370177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.474 [2024-07-15 14:59:07.370216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.474 [2024-07-15 14:59:07.370230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.474 [2024-07-15 14:59:07.370236] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.474 [2024-07-15 14:59:07.370242] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.474 [2024-07-15 14:59:07.380624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.474 qpair failed and we were unable to recover it. 00:24:33.474 [2024-07-15 14:59:07.390189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.474 [2024-07-15 14:59:07.390224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.474 [2024-07-15 14:59:07.390238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.474 [2024-07-15 14:59:07.390244] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.474 [2024-07-15 14:59:07.390250] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.732 [2024-07-15 14:59:07.400668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.732 qpair failed and we were unable to recover it. 
00:24:33.732 [2024-07-15 14:59:07.410243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.732 [2024-07-15 14:59:07.410279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.732 [2024-07-15 14:59:07.410297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.732 [2024-07-15 14:59:07.410303] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.732 [2024-07-15 14:59:07.410309] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.732 [2024-07-15 14:59:07.420746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.732 qpair failed and we were unable to recover it. 00:24:33.732 [2024-07-15 14:59:07.430250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.732 [2024-07-15 14:59:07.430281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.732 [2024-07-15 14:59:07.430295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.732 [2024-07-15 14:59:07.430301] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.732 [2024-07-15 14:59:07.430307] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.732 [2024-07-15 14:59:07.440646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.732 qpair failed and we were unable to recover it. 00:24:33.732 [2024-07-15 14:59:07.450379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.732 [2024-07-15 14:59:07.450416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.732 [2024-07-15 14:59:07.450430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.732 [2024-07-15 14:59:07.450436] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.732 [2024-07-15 14:59:07.450441] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.732 [2024-07-15 14:59:07.460781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.732 qpair failed and we were unable to recover it. 
00:24:33.732 [2024-07-15 14:59:07.470274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.732 [2024-07-15 14:59:07.470310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.732 [2024-07-15 14:59:07.470324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.732 [2024-07-15 14:59:07.470330] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.732 [2024-07-15 14:59:07.470336] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.732 [2024-07-15 14:59:07.480963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.732 qpair failed and we were unable to recover it. 00:24:33.732 [2024-07-15 14:59:07.490502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.490547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.490561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.490569] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.490574] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.500990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 00:24:33.733 [2024-07-15 14:59:07.510419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.510450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.510463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.510470] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.510475] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.520981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 
00:24:33.733 [2024-07-15 14:59:07.530565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.530603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.530617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.530624] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.530629] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.541092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 00:24:33.733 [2024-07-15 14:59:07.550609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.550650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.550663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.550670] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.550675] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.561202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 00:24:33.733 [2024-07-15 14:59:07.570578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.570618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.570632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.570639] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.570644] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.581141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 
00:24:33.733 [2024-07-15 14:59:07.590656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.590687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.590704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.590710] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.590716] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.601361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 00:24:33.733 [2024-07-15 14:59:07.610893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.610930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.610944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.610950] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.610956] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.621248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 00:24:33.733 [2024-07-15 14:59:07.630900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.630938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.630952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.630958] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.630964] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.733 [2024-07-15 14:59:07.641375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.733 qpair failed and we were unable to recover it. 
00:24:33.733 [2024-07-15 14:59:07.651119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.733 [2024-07-15 14:59:07.651152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.733 [2024-07-15 14:59:07.651183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.733 [2024-07-15 14:59:07.651190] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.733 [2024-07-15 14:59:07.651196] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.661488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 00:24:33.991 [2024-07-15 14:59:07.671149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.671185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.671199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.671206] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.991 [2024-07-15 14:59:07.671215] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.681786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 00:24:33.991 [2024-07-15 14:59:07.691185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.691224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.691238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.691244] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.991 [2024-07-15 14:59:07.691250] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.701569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 
00:24:33.991 [2024-07-15 14:59:07.711273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.711311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.711325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.711331] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.991 [2024-07-15 14:59:07.711337] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.721705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 00:24:33.991 [2024-07-15 14:59:07.731267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.731301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.731315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.731322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.991 [2024-07-15 14:59:07.731328] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.741673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 00:24:33.991 [2024-07-15 14:59:07.751315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.751346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.751360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.751366] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.991 [2024-07-15 14:59:07.751372] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.761822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 
00:24:33.991 [2024-07-15 14:59:07.771317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.771353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.771367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.771373] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.991 [2024-07-15 14:59:07.771379] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.781854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 00:24:33.991 [2024-07-15 14:59:07.791512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.791554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.791569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.791575] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.991 [2024-07-15 14:59:07.791581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.991 [2024-07-15 14:59:07.801943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.991 qpair failed and we were unable to recover it. 00:24:33.991 [2024-07-15 14:59:07.811490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.991 [2024-07-15 14:59:07.811522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.991 [2024-07-15 14:59:07.811536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.991 [2024-07-15 14:59:07.811547] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.992 [2024-07-15 14:59:07.811553] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.992 [2024-07-15 14:59:07.821986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.992 qpair failed and we were unable to recover it. 
00:24:33.992 [2024-07-15 14:59:07.831591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.992 [2024-07-15 14:59:07.831631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.992 [2024-07-15 14:59:07.831645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.992 [2024-07-15 14:59:07.831652] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.992 [2024-07-15 14:59:07.831657] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.992 [2024-07-15 14:59:07.842150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.992 qpair failed and we were unable to recover it. 00:24:33.992 [2024-07-15 14:59:07.851603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.992 [2024-07-15 14:59:07.851642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.992 [2024-07-15 14:59:07.851660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.992 [2024-07-15 14:59:07.851667] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.992 [2024-07-15 14:59:07.851672] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.992 [2024-07-15 14:59:07.861922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.992 qpair failed and we were unable to recover it. 00:24:33.992 [2024-07-15 14:59:07.871784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.992 [2024-07-15 14:59:07.871822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.992 [2024-07-15 14:59:07.871836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.992 [2024-07-15 14:59:07.871842] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.992 [2024-07-15 14:59:07.871848] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.992 [2024-07-15 14:59:07.882210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.992 qpair failed and we were unable to recover it. 
00:24:33.992 [2024-07-15 14:59:07.891726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:33.992 [2024-07-15 14:59:07.891763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:33.992 [2024-07-15 14:59:07.891777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:33.992 [2024-07-15 14:59:07.891784] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:33.992 [2024-07-15 14:59:07.891790] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.992 [2024-07-15 14:59:07.902144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.992 qpair failed and we were unable to recover it. 00:24:34.250 [2024-07-15 14:59:07.911775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.250 [2024-07-15 14:59:07.911812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.250 [2024-07-15 14:59:07.911826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.250 [2024-07-15 14:59:07.911832] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.250 [2024-07-15 14:59:07.911838] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:07.922139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:07.931800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:07.931838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:07.931853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:07.931859] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:07.931865] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:07.942012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 
00:24:34.251 [2024-07-15 14:59:07.951866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:07.951904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:07.951919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:07.951925] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:07.951931] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:07.962222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:07.972028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:07.972069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:07.972083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:07.972090] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:07.972096] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:07.982269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:07.992031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:07.992068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:07.992082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:07.992088] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:07.992094] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.002297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 
00:24:34.251 [2024-07-15 14:59:08.012020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.012056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.012070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.012076] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.012082] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.022472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:08.032170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.032207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.032224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.032230] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.032236] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.042527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:08.052173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.052211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.052225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.052231] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.052237] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.062478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 
00:24:34.251 [2024-07-15 14:59:08.072286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.072321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.072334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.072341] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.072347] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.082623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:08.092364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.092403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.092418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.092424] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.092430] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.102572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:08.112429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.112468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.112482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.112489] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.112497] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.122690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 
00:24:34.251 [2024-07-15 14:59:08.132565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.132606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.132620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.132626] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.132632] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.142655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.251 [2024-07-15 14:59:08.152428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.251 [2024-07-15 14:59:08.152461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.251 [2024-07-15 14:59:08.152475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.251 [2024-07-15 14:59:08.152481] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.251 [2024-07-15 14:59:08.152487] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.251 [2024-07-15 14:59:08.162815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.251 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.172570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.172607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.172621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.172627] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.172633] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.182836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 
00:24:34.510 [2024-07-15 14:59:08.192577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.192619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.192633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.192639] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.192645] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.202983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.212735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.212774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.212788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.212795] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.212800] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.222899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.232700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.232736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.232750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.232756] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.232762] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.243109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 
00:24:34.510 [2024-07-15 14:59:08.252759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.252800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.252814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.252821] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.252826] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.263113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.272855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.272896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.272910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.272916] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.272922] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.283181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.292786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.292820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.292837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.292844] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.292849] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.303303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 
00:24:34.510 [2024-07-15 14:59:08.312931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.312965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.312979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.312986] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.312992] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.323705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.332983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.333022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.333037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.333044] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.333050] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.343276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.353031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.353069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.353084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.353090] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.353096] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.363410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 
00:24:34.510 [2024-07-15 14:59:08.373231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.373271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.373285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.373292] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.373298] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.383530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.392998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.393036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.393050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.393057] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.393065] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.403568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 00:24:34.510 [2024-07-15 14:59:08.413182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.510 [2024-07-15 14:59:08.413221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.510 [2024-07-15 14:59:08.413236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.510 [2024-07-15 14:59:08.413243] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.510 [2024-07-15 14:59:08.413248] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.510 [2024-07-15 14:59:08.423574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.510 qpair failed and we were unable to recover it. 
00:24:34.768 [2024-07-15 14:59:08.433270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.768 [2024-07-15 14:59:08.433307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.768 [2024-07-15 14:59:08.433321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.768 [2024-07-15 14:59:08.433328] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.433349] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.443718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.453344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.453385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.453400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.453406] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.453412] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.463678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.473501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.473533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.473561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.473569] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.473575] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.483919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 
00:24:34.769 [2024-07-15 14:59:08.493446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.493486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.493500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.493507] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.493512] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.503855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.513429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.513470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.513484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.513491] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.513497] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.524068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.533548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.533583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.533597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.533604] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.533609] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.544078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 
00:24:34.769 [2024-07-15 14:59:08.553631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.553666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.553680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.553687] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.553696] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.564010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.573759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.573797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.573811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.573818] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.573824] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.584135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.593710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.593749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.593762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.593769] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.593774] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.604352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 
00:24:34.769 [2024-07-15 14:59:08.613820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.613859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.613873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.613879] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.613885] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.624319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.633895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.633931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.633945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.633951] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.633957] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.644430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:34.769 [2024-07-15 14:59:08.653943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.653982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.653996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.654002] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.654008] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.664284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 
00:24:34.769 [2024-07-15 14:59:08.673984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:34.769 [2024-07-15 14:59:08.674021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:34.769 [2024-07-15 14:59:08.674036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:34.769 [2024-07-15 14:59:08.674042] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:34.769 [2024-07-15 14:59:08.674048] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.769 [2024-07-15 14:59:08.684376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.769 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.694097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.694133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.694147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.694154] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.694160] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.704533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.714041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.714075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.714090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.714096] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.714102] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.724604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 
00:24:35.028 [2024-07-15 14:59:08.734084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.734119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.734137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.734143] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.734148] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.744704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.754243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.754285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.754299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.754306] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.754311] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.764633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.774233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.774267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.774281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.774288] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.774293] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.784876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 
00:24:35.028 [2024-07-15 14:59:08.794376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.794417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.794431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.794437] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.794443] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.804890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.814394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.814432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.814446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.814452] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.814458] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.824898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.834633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.834671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.834685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.834691] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.834697] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.844984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 
00:24:35.028 [2024-07-15 14:59:08.854476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.854509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.854522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.854529] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.854534] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.865087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.874641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.874673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.874687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.028 [2024-07-15 14:59:08.874694] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.028 [2024-07-15 14:59:08.874699] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.028 [2024-07-15 14:59:08.885093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.028 qpair failed and we were unable to recover it. 00:24:35.028 [2024-07-15 14:59:08.894691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.028 [2024-07-15 14:59:08.894727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.028 [2024-07-15 14:59:08.894741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.029 [2024-07-15 14:59:08.894748] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.029 [2024-07-15 14:59:08.894753] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.029 [2024-07-15 14:59:08.905108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.029 qpair failed and we were unable to recover it. 
00:24:35.029 [2024-07-15 14:59:08.914745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.029 [2024-07-15 14:59:08.914780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.029 [2024-07-15 14:59:08.914798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.029 [2024-07-15 14:59:08.914805] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.029 [2024-07-15 14:59:08.914811] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.029 [2024-07-15 14:59:08.925355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.029 qpair failed and we were unable to recover it. 00:24:35.029 [2024-07-15 14:59:08.934750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.029 [2024-07-15 14:59:08.934788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.029 [2024-07-15 14:59:08.934802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.029 [2024-07-15 14:59:08.934808] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.029 [2024-07-15 14:59:08.934814] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.029 [2024-07-15 14:59:08.945386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.029 qpair failed and we were unable to recover it. 00:24:35.286 [2024-07-15 14:59:08.954859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.286 [2024-07-15 14:59:08.954890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.286 [2024-07-15 14:59:08.954904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.286 [2024-07-15 14:59:08.954911] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.286 [2024-07-15 14:59:08.954917] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.286 [2024-07-15 14:59:08.965706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.286 qpair failed and we were unable to recover it. 
00:24:35.286 [2024-07-15 14:59:08.974901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.286 [2024-07-15 14:59:08.974939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.286 [2024-07-15 14:59:08.974954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.286 [2024-07-15 14:59:08.974960] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.286 [2024-07-15 14:59:08.974966] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.286 [2024-07-15 14:59:08.985562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.286 qpair failed and we were unable to recover it. 00:24:35.286 [2024-07-15 14:59:08.995095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.286 [2024-07-15 14:59:08.995139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.286 [2024-07-15 14:59:08.995153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.286 [2024-07-15 14:59:08.995160] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.286 [2024-07-15 14:59:08.995169] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.286 [2024-07-15 14:59:09.005526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.286 qpair failed and we were unable to recover it. 00:24:35.286 [2024-07-15 14:59:09.015011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.286 [2024-07-15 14:59:09.015044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.286 [2024-07-15 14:59:09.015058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.286 [2024-07-15 14:59:09.015064] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.286 [2024-07-15 14:59:09.015070] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.286 [2024-07-15 14:59:09.025397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.286 qpair failed and we were unable to recover it. 
00:24:35.286 [2024-07-15 14:59:09.035168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.286 [2024-07-15 14:59:09.035210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.286 [2024-07-15 14:59:09.035224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.286 [2024-07-15 14:59:09.035230] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.286 [2024-07-15 14:59:09.035236] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.286 [2024-07-15 14:59:09.045569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.286 qpair failed and we were unable to recover it. 00:24:35.286 [2024-07-15 14:59:09.055181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.055218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.055232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.055238] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.055244] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.287 [2024-07-15 14:59:09.065562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.287 qpair failed and we were unable to recover it. 00:24:35.287 [2024-07-15 14:59:09.075244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.075280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.075294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.075300] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.075306] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.287 [2024-07-15 14:59:09.085807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.287 qpair failed and we were unable to recover it. 
00:24:35.287 [2024-07-15 14:59:09.095212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.095248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.095262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.095269] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.095275] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.287 [2024-07-15 14:59:09.105762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.287 qpair failed and we were unable to recover it. 00:24:35.287 [2024-07-15 14:59:09.115258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.115299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.115313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.115319] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.115325] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.287 [2024-07-15 14:59:09.125973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.287 qpair failed and we were unable to recover it. 00:24:35.287 [2024-07-15 14:59:09.135361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.135397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.135411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.135417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.135423] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.287 [2024-07-15 14:59:09.145811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.287 qpair failed and we were unable to recover it. 
00:24:35.287 [2024-07-15 14:59:09.155459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.155505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.155520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.155526] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.155532] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.287 [2024-07-15 14:59:09.165941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.287 qpair failed and we were unable to recover it. 00:24:35.287 [2024-07-15 14:59:09.175575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.175609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.175626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.175633] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.175638] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.287 [2024-07-15 14:59:09.185959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.287 qpair failed and we were unable to recover it. 00:24:35.287 [2024-07-15 14:59:09.195612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.287 [2024-07-15 14:59:09.195650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.287 [2024-07-15 14:59:09.195664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.287 [2024-07-15 14:59:09.195670] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.287 [2024-07-15 14:59:09.195676] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.545 [2024-07-15 14:59:09.206141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.545 qpair failed and we were unable to recover it. 
00:24:35.545 [2024-07-15 14:59:09.215649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.545 [2024-07-15 14:59:09.215686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.545 [2024-07-15 14:59:09.215701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.545 [2024-07-15 14:59:09.215707] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.545 [2024-07-15 14:59:09.215712] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.545 [2024-07-15 14:59:09.226108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.545 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.235762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.235803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.235817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.235823] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.235829] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.246301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.255804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.255837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.255851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.255858] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.255863] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.266189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 
00:24:35.546 [2024-07-15 14:59:09.275920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.275960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.275974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.275981] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.275986] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.286263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.296036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.296075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.296089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.296096] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.296102] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.306454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.316067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.316106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.316120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.316127] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.316132] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.326478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 
00:24:35.546 [2024-07-15 14:59:09.336054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.336088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.336101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.336108] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.336114] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.346549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.356075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.356109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.356126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.356132] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.356137] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.366514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.376205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.376242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.376256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.376262] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.376268] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.386685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 
00:24:35.546 [2024-07-15 14:59:09.396340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.396377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.396391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.396397] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.396403] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.406774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.416316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.416355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.416370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.416377] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.416382] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.426722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 00:24:35.546 [2024-07-15 14:59:09.436352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.436386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.436401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.436407] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.436416] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.546 [2024-07-15 14:59:09.446750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.546 qpair failed and we were unable to recover it. 
00:24:35.546 [2024-07-15 14:59:09.456472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.546 [2024-07-15 14:59:09.456512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.546 [2024-07-15 14:59:09.456526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.546 [2024-07-15 14:59:09.456533] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.546 [2024-07-15 14:59:09.456543] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.466857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.804 qpair failed and we were unable to recover it. 00:24:35.804 [2024-07-15 14:59:09.476487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.804 [2024-07-15 14:59:09.476527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.804 [2024-07-15 14:59:09.476545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.804 [2024-07-15 14:59:09.476552] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.804 [2024-07-15 14:59:09.476559] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.486802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.804 qpair failed and we were unable to recover it. 00:24:35.804 [2024-07-15 14:59:09.496619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.804 [2024-07-15 14:59:09.496662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.804 [2024-07-15 14:59:09.496676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.804 [2024-07-15 14:59:09.496682] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.804 [2024-07-15 14:59:09.496687] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.507005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.804 qpair failed and we were unable to recover it. 
00:24:35.804 [2024-07-15 14:59:09.516533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.804 [2024-07-15 14:59:09.516575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.804 [2024-07-15 14:59:09.516589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.804 [2024-07-15 14:59:09.516595] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.804 [2024-07-15 14:59:09.516601] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.527088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.804 qpair failed and we were unable to recover it. 00:24:35.804 [2024-07-15 14:59:09.536641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.804 [2024-07-15 14:59:09.536679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.804 [2024-07-15 14:59:09.536693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.804 [2024-07-15 14:59:09.536700] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.804 [2024-07-15 14:59:09.536705] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.547144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.804 qpair failed and we were unable to recover it. 00:24:35.804 [2024-07-15 14:59:09.556855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.804 [2024-07-15 14:59:09.556889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.804 [2024-07-15 14:59:09.556903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.804 [2024-07-15 14:59:09.556910] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.804 [2024-07-15 14:59:09.556916] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.567192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.804 qpair failed and we were unable to recover it. 
00:24:35.804 [2024-07-15 14:59:09.576829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.804 [2024-07-15 14:59:09.576862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.804 [2024-07-15 14:59:09.576876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.804 [2024-07-15 14:59:09.576882] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.804 [2024-07-15 14:59:09.576888] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.587241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.804 qpair failed and we were unable to recover it. 00:24:35.804 [2024-07-15 14:59:09.596850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.804 [2024-07-15 14:59:09.596886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.804 [2024-07-15 14:59:09.596900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.804 [2024-07-15 14:59:09.596906] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.804 [2024-07-15 14:59:09.596912] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.804 [2024-07-15 14:59:09.607806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.805 qpair failed and we were unable to recover it. 00:24:35.805 [2024-07-15 14:59:09.617017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.805 [2024-07-15 14:59:09.617055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.805 [2024-07-15 14:59:09.617072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.805 [2024-07-15 14:59:09.617079] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.805 [2024-07-15 14:59:09.617085] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.805 [2024-07-15 14:59:09.627355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.805 qpair failed and we were unable to recover it. 
00:24:35.805 [2024-07-15 14:59:09.637022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.805 [2024-07-15 14:59:09.637063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.805 [2024-07-15 14:59:09.637077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.805 [2024-07-15 14:59:09.637084] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.805 [2024-07-15 14:59:09.637090] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.805 [2024-07-15 14:59:09.647484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.805 qpair failed and we were unable to recover it. 00:24:35.805 [2024-07-15 14:59:09.657014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.805 [2024-07-15 14:59:09.657049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.805 [2024-07-15 14:59:09.657063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.805 [2024-07-15 14:59:09.657069] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.805 [2024-07-15 14:59:09.657075] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.805 [2024-07-15 14:59:09.667433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.805 qpair failed and we were unable to recover it. 00:24:35.805 [2024-07-15 14:59:09.677149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.805 [2024-07-15 14:59:09.677182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.805 [2024-07-15 14:59:09.677197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.805 [2024-07-15 14:59:09.677203] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.805 [2024-07-15 14:59:09.677209] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.805 [2024-07-15 14:59:09.687651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.805 qpair failed and we were unable to recover it. 
00:24:35.805 [2024-07-15 14:59:09.697127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.805 [2024-07-15 14:59:09.697166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.805 [2024-07-15 14:59:09.697180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.805 [2024-07-15 14:59:09.697186] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.805 [2024-07-15 14:59:09.697192] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.805 [2024-07-15 14:59:09.707484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.805 qpair failed and we were unable to recover it. 00:24:35.805 [2024-07-15 14:59:09.717250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:35.805 [2024-07-15 14:59:09.717292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:35.805 [2024-07-15 14:59:09.717306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:35.805 [2024-07-15 14:59:09.717312] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:35.805 [2024-07-15 14:59:09.717318] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.727630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.737291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.737329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.737344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.737350] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.737355] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.747684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 
00:24:36.063 [2024-07-15 14:59:09.757305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.757344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.757358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.757364] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.757370] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.767702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.777336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.777376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.777390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.777396] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.777402] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.787912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.797575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.797612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.797630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.797636] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.797642] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.807842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 
00:24:36.063 [2024-07-15 14:59:09.817432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.817473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.817487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.817494] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.817499] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.828028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.837547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.837584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.837598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.837605] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.837610] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.848093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.857618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.857657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.857672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.857678] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.857684] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.868062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 
00:24:36.063 [2024-07-15 14:59:09.877827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.877862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.877876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.877883] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.877891] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.888046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.897785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.897825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.897839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.897846] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.897851] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.908222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.917687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.917724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.917738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.917745] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.917750] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.928322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 
00:24:36.063 [2024-07-15 14:59:09.937876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.937914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.937927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.937934] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.937939] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.948331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.957853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.957889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.957904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.957910] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.957916] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.063 [2024-07-15 14:59:09.968234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.063 qpair failed and we were unable to recover it. 00:24:36.063 [2024-07-15 14:59:09.978026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.063 [2024-07-15 14:59:09.978060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.063 [2024-07-15 14:59:09.978075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.063 [2024-07-15 14:59:09.978082] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.063 [2024-07-15 14:59:09.978088] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.321 [2024-07-15 14:59:09.988302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.321 qpair failed and we were unable to recover it. 
00:24:36.321 [2024-07-15 14:59:09.998022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.321 [2024-07-15 14:59:09.998062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.321 [2024-07-15 14:59:09.998077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.321 [2024-07-15 14:59:09.998084] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.321 [2024-07-15 14:59:09.998089] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.321 [2024-07-15 14:59:10.008636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.321 qpair failed and we were unable to recover it. 00:24:36.321 [2024-07-15 14:59:10.018061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.321 [2024-07-15 14:59:10.018101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.321 [2024-07-15 14:59:10.018117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.321 [2024-07-15 14:59:10.018124] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.018131] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.028393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 00:24:36.322 [2024-07-15 14:59:10.038142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.038183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.038202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.038209] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.038215] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.048525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 
00:24:36.322 [2024-07-15 14:59:10.058082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.058122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.058140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.058146] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.058152] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.068560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 00:24:36.322 [2024-07-15 14:59:10.078275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.078311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.078329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.078336] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.078342] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.088609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 00:24:36.322 [2024-07-15 14:59:10.098214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.098253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.098267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.098273] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.098279] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.108843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 
00:24:36.322 [2024-07-15 14:59:10.118353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.118394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.118408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.118415] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.118420] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.128926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 00:24:36.322 [2024-07-15 14:59:10.138462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.138500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.138514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.138521] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.138527] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.148796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 00:24:36.322 [2024-07-15 14:59:10.158500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.158533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.158551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.158558] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.158563] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.168980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 
00:24:36.322 [2024-07-15 14:59:10.178651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.178688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.178702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.178709] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.178714] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.188763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 00:24:36.322 [2024-07-15 14:59:10.198677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.198722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.198736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.198742] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.198748] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.209014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 00:24:36.322 [2024-07-15 14:59:10.218761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.218797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.218810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.218817] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.218823] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.322 [2024-07-15 14:59:10.229110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.322 qpair failed and we were unable to recover it. 
00:24:36.322 [2024-07-15 14:59:10.238796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.322 [2024-07-15 14:59:10.238830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.322 [2024-07-15 14:59:10.238848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.322 [2024-07-15 14:59:10.238855] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.322 [2024-07-15 14:59:10.238860] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.580 [2024-07-15 14:59:10.249634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.580 qpair failed and we were unable to recover it. 00:24:36.580 [2024-07-15 14:59:10.258841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.580 [2024-07-15 14:59:10.258877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.580 [2024-07-15 14:59:10.258890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.580 [2024-07-15 14:59:10.258897] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.580 [2024-07-15 14:59:10.258902] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.580 [2024-07-15 14:59:10.269237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.580 qpair failed and we were unable to recover it. 00:24:36.580 [2024-07-15 14:59:10.278878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.580 [2024-07-15 14:59:10.278920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.580 [2024-07-15 14:59:10.278934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.580 [2024-07-15 14:59:10.278940] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.580 [2024-07-15 14:59:10.278946] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.580 [2024-07-15 14:59:10.289405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.580 qpair failed and we were unable to recover it. 
00:24:36.580 [2024-07-15 14:59:10.298942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.580 [2024-07-15 14:59:10.298978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.580 [2024-07-15 14:59:10.298992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.580 [2024-07-15 14:59:10.298999] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.580 [2024-07-15 14:59:10.299004] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.580 [2024-07-15 14:59:10.309344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.580 qpair failed and we were unable to recover it. 00:24:36.580 [2024-07-15 14:59:10.318975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.580 [2024-07-15 14:59:10.319007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.580 [2024-07-15 14:59:10.319021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.580 [2024-07-15 14:59:10.319028] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.580 [2024-07-15 14:59:10.319036] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.580 [2024-07-15 14:59:10.329528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.580 qpair failed and we were unable to recover it. 00:24:36.580 [2024-07-15 14:59:10.339032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.580 [2024-07-15 14:59:10.339070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.580 [2024-07-15 14:59:10.339084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.580 [2024-07-15 14:59:10.339091] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.339096] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.349524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 
00:24:36.581 [2024-07-15 14:59:10.359217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.359254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.359268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.581 [2024-07-15 14:59:10.359275] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.359280] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.369621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 00:24:36.581 [2024-07-15 14:59:10.379167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.379206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.379220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.581 [2024-07-15 14:59:10.379226] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.379232] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.389635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 00:24:36.581 [2024-07-15 14:59:10.399276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.399313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.399328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.581 [2024-07-15 14:59:10.399334] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.399340] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.409741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 
00:24:36.581 [2024-07-15 14:59:10.419213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.419252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.419266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.581 [2024-07-15 14:59:10.419273] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.419279] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.429777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 00:24:36.581 [2024-07-15 14:59:10.439302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.439340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.439354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.581 [2024-07-15 14:59:10.439361] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.439366] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.449662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 00:24:36.581 [2024-07-15 14:59:10.459408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.459440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.459454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.581 [2024-07-15 14:59:10.459460] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.459466] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.469850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 
00:24:36.581 [2024-07-15 14:59:10.479525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.479562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.479576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.581 [2024-07-15 14:59:10.479583] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.581 [2024-07-15 14:59:10.479589] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.581 [2024-07-15 14:59:10.489898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.581 qpair failed and we were unable to recover it. 00:24:36.581 [2024-07-15 14:59:10.499601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:36.581 [2024-07-15 14:59:10.499655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:36.581 [2024-07-15 14:59:10.499673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:36.839 [2024-07-15 14:59:10.499681] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:36.839 [2024-07-15 14:59:10.499688] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.839 [2024-07-15 14:59:10.509951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.839 qpair failed and we were unable to recover it. 
00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Read completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed 00:24:37.768 Write completed with error (sct=0, sc=8) 00:24:37.768 starting I/O failed
00:24:37.768 [2024-07-15 14:59:11.514878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.769 [2024-07-15 14:59:11.522351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:37.769 [2024-07-15 14:59:11.522390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:37.769 [2024-07-15 14:59:11.522406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:37.769 [2024-07-15 14:59:11.522414] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:37.769 [2024-07-15 14:59:11.522420] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:24:37.769 [2024-07-15 14:59:11.533212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.769 qpair failed and we were unable to recover it.
00:24:37.769 [2024-07-15 14:59:11.542855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:37.769 [2024-07-15 14:59:11.542889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:37.769 [2024-07-15 14:59:11.542905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:37.769 [2024-07-15 14:59:11.542915] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:37.769 [2024-07-15 14:59:11.542921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380 00:24:37.769 [2024-07-15 14:59:11.553235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:37.769 qpair failed and we were unable to recover it.
00:24:37.769 [2024-07-15 14:59:11.562682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:37.769 [2024-07-15 14:59:11.562716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:37.769 [2024-07-15 14:59:11.562735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:37.769 [2024-07-15 14:59:11.562743] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:37.769 [2024-07-15 14:59:11.562750] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:37.769 [2024-07-15 14:59:11.573288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.769 qpair failed and we were unable to recover it.
00:24:37.769 [2024-07-15 14:59:11.582841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:37.769 [2024-07-15 14:59:11.582880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:37.769 [2024-07-15 14:59:11.582895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:37.769 [2024-07-15 14:59:11.582902] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:37.769 [2024-07-15 14:59:11.582907] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:37.769 [2024-07-15 14:59:11.593138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:37.769 qpair failed and we were unable to recover it.
00:24:37.769 [2024-07-15 14:59:11.593262] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:37.769 A controller has encountered a failure and is being reset. 00:24:37.769 [2024-07-15 14:59:11.603029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:37.769 [2024-07-15 14:59:11.603073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:37.769 [2024-07-15 14:59:11.603099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:37.769 [2024-07-15 14:59:11.603111] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:37.769 [2024-07-15 14:59:11.603121] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:37.769 [2024-07-15 14:59:11.613564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:37.769 qpair failed and we were unable to recover it. 00:24:37.769 [2024-07-15 14:59:11.622946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:37.769 [2024-07-15 14:59:11.622982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:37.769 [2024-07-15 14:59:11.622998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:37.769 [2024-07-15 14:59:11.623005] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:37.769 [2024-07-15 14:59:11.623011] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:37.769 [2024-07-15 14:59:11.633350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:37.769 qpair failed and we were unable to recover it. 00:24:37.769 [2024-07-15 14:59:11.633494] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:37.769 [2024-07-15 14:59:11.665148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:37.769 Controller properly reset. 00:24:38.025 Initializing NVMe Controllers 00:24:38.025 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.026 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:38.026 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:38.026 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:38.026 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:38.026 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:38.026 Initialization complete. Launching workers. 
00:24:38.026 Starting thread on core 1 00:24:38.026 Starting thread on core 2 00:24:38.026 Starting thread on core 3 00:24:38.026 Starting thread on core 0 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:38.026 00:24:38.026 real 0m12.554s 00:24:38.026 user 0m28.183s 00:24:38.026 sys 0m2.100s 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:38.026 ************************************ 00:24:38.026 END TEST nvmf_target_disconnect_tc2 00:24:38.026 ************************************ 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:38.026 ************************************ 00:24:38.026 START TEST nvmf_target_disconnect_tc3 00:24:38.026 ************************************ 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc3 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2963935 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:24:38.026 14:59:11 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:38.026 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.922 14:59:13 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2962678 00:24:39.922 14:59:13 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O 
failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Write completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 Read completed with error (sct=0, sc=8) 00:24:41.317 starting I/O failed 00:24:41.317 [2024-07-15 14:59:14.959547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.883 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2962678 Killed "${NVMF_APP[@]}" "$@" 00:24:41.883 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:41.883 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:41.883 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:41.883 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:41.883 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2964623 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2964623 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2964623 ']' 00:24:42.141 14:59:15 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:42.141 14:59:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:42.141 [2024-07-15 14:59:15.852815] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:24:42.141 [2024-07-15 14:59:15.852865] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:42.141 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.141 [2024-07-15 14:59:15.922011] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 
starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Read completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 Write completed with error (sct=0, sc=8) 00:24:42.141 starting I/O failed 00:24:42.141 [2024-07-15 14:59:15.964521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:42.141 [2024-07-15 14:59:15.998655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:42.141 [2024-07-15 14:59:15.998686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:42.141 [2024-07-15 14:59:15.998692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:42.141 [2024-07-15 14:59:15.998698] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:42.141 [2024-07-15 14:59:15.998703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:42.141 [2024-07-15 14:59:15.998830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:42.141 [2024-07-15 14:59:15.998941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:42.141 [2024-07-15 14:59:15.999044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:42.141 [2024-07-15 14:59:15.999046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:43.074 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 Malloc0 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:43.075 14:59:16 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 [2024-07-15 14:59:16.741176] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23e2cf0/0x23ee8c0) succeed. 00:24:43.075 [2024-07-15 14:59:16.750514] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23e4330/0x242ff50) succeed. 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 [2024-07-15 14:59:16.893681] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.075 14:59:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2963935 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 
00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Write completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 Read completed with error (sct=0, sc=8) 00:24:43.075 starting I/O failed 00:24:43.075 [2024-07-15 14:59:16.969515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:43.075 [2024-07-15 14:59:16.971243] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:43.075 [2024-07-15 14:59:16.971261] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:43.075 [2024-07-15 14:59:16.971267] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:44.443 [2024-07-15 14:59:17.975141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:44.443 qpair failed and we were unable to recover it. 
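For readability, the target-side setup that host/target_disconnect.sh performs in the trace above (Malloc0 bdev, rdma transport, subsystem cnode1, namespace, and listeners on the failover address 192.168.100.9) can be sketched as standalone rpc.py calls. This mirrors the rpc_cmd lines shown above; it assumes the default /var/tmp/spdk.sock RPC socket and an spdk checkout as the working directory.
# sketch of the tc3 target setup, taken from the rpc_cmd trace above
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420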
00:24:44.443 [2024-07-15 14:59:17.976645] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:44.443 [2024-07-15 14:59:17.976659] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:44.443 [2024-07-15 14:59:17.976665] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:45.373 [2024-07-15 14:59:18.980495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:45.373 qpair failed and we were unable to recover it. 00:24:45.373 [2024-07-15 14:59:18.982016] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:45.373 [2024-07-15 14:59:18.982030] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:45.373 [2024-07-15 14:59:18.982036] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:46.303 [2024-07-15 14:59:19.985966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:46.303 qpair failed and we were unable to recover it. 00:24:46.303 [2024-07-15 14:59:19.987344] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:46.303 [2024-07-15 14:59:19.987359] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:46.303 [2024-07-15 14:59:19.987364] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:47.232 [2024-07-15 14:59:20.991274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:47.232 qpair failed and we were unable to recover it. 00:24:47.232 [2024-07-15 14:59:20.992690] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:47.232 [2024-07-15 14:59:20.992705] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:47.232 [2024-07-15 14:59:20.992711] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:48.156 [2024-07-15 14:59:21.996559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:48.156 qpair failed and we were unable to recover it. 00:24:48.156 [2024-07-15 14:59:21.997996] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:48.156 [2024-07-15 14:59:21.998011] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:48.156 [2024-07-15 14:59:21.998017] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:49.082 [2024-07-15 14:59:23.001762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:49.082 qpair failed and we were unable to recover it. 
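Each RDMA_CM_EVENT_REJECTED / "Failed to connect rqpair" / "qpair failed and we were unable to recover it" block above is one reconnect attempt by the initiator, retried roughly once per second while the replacement nvmf_tgt on 192.168.100.9 is still coming up. A hedged troubleshooting sketch, not part of the test itself: the nvme-cli discover command and the nvmf_subsystem_get_listeners RPC are assumed to be available, and output formats vary by version.
# check from the initiator side whether the failover address is listening yet
nvme discover -t rdma -a 192.168.100.9 -s 4420
# or poll the target's RPC socket until the failover listener appears
until ./scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | grep -q 192.168.100.9; do
    sleep 1
done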
00:24:49.336 [2024-07-15 14:59:23.003206] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:49.336 [2024-07-15 14:59:23.003220] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:49.337 [2024-07-15 14:59:23.003226] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:50.263 [2024-07-15 14:59:24.007069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:50.263 qpair failed and we were unable to recover it. 00:24:50.263 [2024-07-15 14:59:24.008633] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:50.263 [2024-07-15 14:59:24.008655] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:50.263 [2024-07-15 14:59:24.008662] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:51.191 [2024-07-15 14:59:25.012545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:51.191 qpair failed and we were unable to recover it. 00:24:51.191 [2024-07-15 14:59:25.014066] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:51.191 [2024-07-15 14:59:25.014080] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:51.191 [2024-07-15 14:59:25.014086] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:52.121 [2024-07-15 14:59:26.017973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:52.121 qpair failed and we were unable to recover it. 00:24:52.121 [2024-07-15 14:59:26.018094] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:52.121 A controller has encountered a failure and is being reset. 00:24:52.121 Resorting to new failover address 192.168.100.9 00:24:52.121 [2024-07-15 14:59:26.019887] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:52.121 [2024-07-15 14:59:26.019915] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:52.121 [2024-07-15 14:59:26.019926] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:53.491 [2024-07-15 14:59:27.023572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:53.491 qpair failed and we were unable to recover it. 
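The keep-alive failure and "Resorting to new failover address 192.168.100.9" lines above come from the reconnect example launched at the start of this test case; the alternate address is handed to it as the alt_traddr field of its -r transport-ID string, and the example retries against that address after the controller reset. For reference, the launch command from earlier in the trace:
# reconnect example invocation carrying the failover address (copied from the tc3 setup above)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'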
00:24:53.491 [2024-07-15 14:59:27.024921] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:53.491 [2024-07-15 14:59:27.024935] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:53.491 [2024-07-15 14:59:27.024941] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:54.422 [2024-07-15 14:59:28.028818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:54.422 qpair failed and we were unable to recover it. 00:24:54.422 [2024-07-15 14:59:28.028937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:54.422 [2024-07-15 14:59:28.029034] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:54.422 [2024-07-15 14:59:28.030942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:54.422 Controller properly reset. 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error 
(sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Write completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 Read completed with error (sct=0, sc=8) 00:24:55.353 starting I/O failed 00:24:55.353 [2024-07-15 14:59:29.076532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:55.353 Initializing NVMe Controllers 00:24:55.353 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.353 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:55.353 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:55.353 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:55.353 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:55.353 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:55.353 Initialization complete. Launching workers. 00:24:55.353 Starting thread on core 1 00:24:55.353 Starting thread on core 2 00:24:55.353 Starting thread on core 3 00:24:55.353 Starting thread on core 0 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:24:55.353 00:24:55.353 real 0m17.333s 00:24:55.353 user 1m1.550s 00:24:55.353 sys 0m3.553s 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:55.353 ************************************ 00:24:55.353 END TEST nvmf_target_disconnect_tc3 00:24:55.353 ************************************ 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:55.353 rmmod nvme_rdma 00:24:55.353 rmmod nvme_fabrics 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@489 -- # '[' -n 2964623 ']' 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2964623 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2964623 ']' 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2964623 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2964623 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2964623' 00:24:55.353 killing process with pid 2964623 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2964623 00:24:55.353 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2964623 00:24:55.919 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:55.919 14:59:29 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:55.919 00:24:55.919 real 0m36.594s 00:24:55.919 user 2m25.648s 00:24:55.919 sys 0m9.975s 00:24:55.919 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.919 14:59:29 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:55.919 ************************************ 00:24:55.919 END TEST nvmf_target_disconnect 00:24:55.919 ************************************ 00:24:55.919 14:59:29 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:55.919 14:59:29 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:55.919 14:59:29 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.919 14:59:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:55.919 14:59:29 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:55.919 00:24:55.919 real 17m15.862s 00:24:55.919 user 43m23.186s 00:24:55.919 sys 4m11.888s 00:24:55.919 14:59:29 nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.919 14:59:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:55.919 ************************************ 00:24:55.919 END TEST nvmf_rdma 00:24:55.919 ************************************ 00:24:55.919 14:59:29 -- common/autotest_common.sh@1142 -- # return 0 00:24:55.919 14:59:29 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:55.919 14:59:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:55.919 14:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.919 14:59:29 -- common/autotest_common.sh@10 -- # set +x 00:24:55.919 ************************************ 00:24:55.919 START TEST spdkcli_nvmf_rdma 00:24:55.919 ************************************ 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:55.919 * Looking 
for test storage... 00:24:55.919 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.919 14:59:29 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2966965 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2966965 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@829 -- # '[' -z 2966965 ']' 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.920 14:59:29 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:56.179 [2024-07-15 14:59:29.842600] Starting SPDK v24.09-pre git sha1 bd4841ef7 / DPDK 24.03.0 initialization... 00:24:56.179 [2024-07-15 14:59:29.842646] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2966965 ] 00:24:56.179 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.179 [2024-07-15 14:59:29.898524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.179 [2024-07-15 14:59:29.972148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.179 [2024-07-15 14:59:29.972152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # return 0 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:56.746 14:59:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.005 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:57.005 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:57.005 14:59:30 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.005 14:59:30 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:25:02.272 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:02.272 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:25:02.273 Found 0000:da:00.1 
(0x15b3 - 0x1015) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:25:02.273 Found net devices under 0000:da:00.0: mlx_0_0 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:25:02.273 Found net devices under 0000:da:00.1: mlx_0_1 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:02.273 14:59:35 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:02.273 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:02.273 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:25:02.273 altname enp218s0f0np0 00:25:02.273 altname ens818f0np0 00:25:02.273 inet 192.168.100.8/24 scope global mlx_0_0 00:25:02.273 valid_lft forever preferred_lft forever 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:02.273 14:59:35 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:02.273 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:02.273 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:25:02.273 altname enp218s0f1np1 00:25:02.273 altname ens818f1np1 00:25:02.273 inet 192.168.100.9/24 scope global mlx_0_1 00:25:02.273 valid_lft forever preferred_lft forever 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:02.273 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:02.274 14:59:35 spdkcli_nvmf_rdma 
-- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:02.274 14:59:35 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:02.274 192.168.100.9' 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:02.274 192.168.100.9' 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:02.274 192.168.100.9' 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:02.274 14:59:36 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:02.274 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:02.274 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:02.274 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:02.274 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:02.274 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:02.274 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:02.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:02.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:02.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:02.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:02.274 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:02.274 ' 00:25:04.809 [2024-07-15 14:59:38.425770] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa76cf0/0x8fd600) succeed. 00:25:04.809 [2024-07-15 14:59:38.435350] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa78380/0x9e86c0) succeed. 
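
As an aside, the RDMA address-discovery step traced above reduces to a few lines of shell. The following is a minimal standalone sketch of the same steps; the get_ip_address pipeline and the head/tail selection are taken directly from the nvmf/common.sh trace in this log, while the wrapper script itself and the echo at the end are illustrative only, and the interface names are the ones used in this particular run.

#!/usr/bin/env bash
# Sketch of the RDMA IP discovery traced above (nvmf/common.sh get_ip_address).
# mlx_0_0 / mlx_0_1 are the interfaces from this run; adjust for other hosts.
get_ip_address() {
    local interface=$1
    # First IPv4 address of the interface, without the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "first target: $NVMF_FIRST_TARGET_IP second target: $NVMF_SECOND_TARGET_IP"
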
00:25:05.746 [2024-07-15 14:59:39.664718] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:25:08.343 [2024-07-15 14:59:41.827672] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:25:10.240 [2024-07-15 14:59:43.685814] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:25:11.612 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:11.613 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:11.613 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:11.613 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:11.613 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:11.613 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:11.613 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:11.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:11.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:11.613 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:11.613 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:11.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:11.613 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:25:11.613 14:59:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:11.871 14:59:45 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:11.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:11.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:11.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:11.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:25:11.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:25:11.871 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:11.871 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:11.871 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:11.871 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:11.871 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:11.871 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:11.871 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:11.871 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:11.871 ' 00:25:17.137 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:17.137 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:17.137 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:17.137 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:17.137 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:25:17.137 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:25:17.137 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:17.137 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:17.137 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:17.137 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:17.137 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:17.137 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:17.137 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:17.138 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2966965 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # '[' -z 2966965 ']' 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # kill -0 2966965 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # uname 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2966965 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2966965' 00:25:17.138 killing process with pid 2966965 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # kill 2966965 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # wait 2966965 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
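
The teardown traced above follows the pattern visible in the autotest_common.sh and nvmf/common.sh traces: stop the target process with killprocess, then unload the NVMe/RDMA kernel modules with a tolerant retry loop. Below is a simplified sketch of that sequence under the assumption that the loop exits as soon as modprobe -r succeeds and pauses between attempts; it is not the exact upstream implementation.

# Simplified sketch of the teardown traced above; not the exact
# autotest_common.sh / nvmf/common.sh code.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0     # nothing to do if it already exited
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # only works for children of this shell
}

nvmf_module_cleanup() {
    sync
    set +e
    for i in {1..20}; do
        # Retry in case the modules still have users.
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                                # pause between attempts (assumption, not in the trace)
    done
    set -e
}

killprocess 2966965                            # pid of the nvmf target from this run
nvmf_module_cleanup
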
00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:17.138 rmmod nvme_rdma 00:25:17.138 rmmod nvme_fabrics 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:17.138 00:25:17.138 real 0m21.306s 00:25:17.138 user 0m45.004s 00:25:17.138 sys 0m4.791s 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:17.138 14:59:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:17.138 ************************************ 00:25:17.138 END TEST spdkcli_nvmf_rdma 00:25:17.138 ************************************ 00:25:17.138 14:59:51 -- common/autotest_common.sh@1142 -- # return 0 00:25:17.138 14:59:51 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:17.138 14:59:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:17.138 14:59:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:17.138 14:59:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:17.138 14:59:51 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:17.138 14:59:51 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:17.138 14:59:51 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:17.138 14:59:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.138 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:25:17.138 14:59:51 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:17.138 14:59:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:17.138 14:59:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:17.138 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:25:21.323 INFO: APP EXITING 00:25:21.323 INFO: killing all VMs 00:25:21.323 INFO: killing vhost app 00:25:21.323 INFO: EXIT DONE 00:25:23.223 Waiting for block devices as requested 00:25:23.481 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:25:23.481 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:23.481 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:23.740 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:23.740 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:23.740 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:23.998 0000:00:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:25:23.998 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:23.998 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:23.998 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:24.257 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:24.257 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:24.257 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:24.257 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:24.528 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:24.529 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:24.529 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:27.057 Cleaning 00:25:27.057 Removing: /var/run/dpdk/spdk0/config 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:27.057 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:27.057 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:27.057 Removing: /var/run/dpdk/spdk1/config 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:27.057 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:27.057 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:27.057 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:27.057 Removing: /var/run/dpdk/spdk2/config 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:27.057 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:27.057 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:27.057 Removing: /var/run/dpdk/spdk3/config 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:27.057 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:27.057 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:27.057 Removing: 
/var/run/dpdk/spdk4/config 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:27.057 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:27.057 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:27.057 Removing: /dev/shm/bdevperf_trace.pid2780214 00:25:27.057 Removing: /dev/shm/bdevperf_trace.pid2886174 00:25:27.057 Removing: /dev/shm/bdev_svc_trace.1 00:25:27.057 Removing: /dev/shm/nvmf_trace.0 00:25:27.057 Removing: /dev/shm/spdk_tgt_trace.pid2672643 00:25:27.057 Removing: /var/run/dpdk/spdk0 00:25:27.057 Removing: /var/run/dpdk/spdk1 00:25:27.316 Removing: /var/run/dpdk/spdk2 00:25:27.316 Removing: /var/run/dpdk/spdk3 00:25:27.316 Removing: /var/run/dpdk/spdk4 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2670283 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2671359 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2672643 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2673272 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2674226 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2674462 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2675433 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2675632 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2675788 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2680512 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2681923 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2682272 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2682575 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2682880 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2683171 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2683421 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2683669 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2683950 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2684696 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2687684 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2687944 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2688206 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2688348 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2688712 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2688932 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2689233 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2689445 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2689703 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2689935 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2690035 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2690209 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2690760 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2690987 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2691292 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2691566 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2691592 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2691658 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2691928 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2692197 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2692476 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2692754 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2693021 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2693276 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2693542 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2693805 
00:25:27.316 Removing: /var/run/dpdk/spdk_pid2694051 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2694315 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2694585 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2694845 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2695122 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2695443 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2695742 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2695995 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2696243 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2696502 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2696901 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2697375 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2697457 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2697828 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2701640 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2742222 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2746045 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2756507 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2761662 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2764990 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2765887 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2772356 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2780214 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2780516 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2784470 00:25:27.316 Removing: /var/run/dpdk/spdk_pid2790103 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2792701 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2802324 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2825903 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2829220 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2884187 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2885101 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2886174 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2890234 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2896846 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2897760 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2898673 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2899595 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2899839 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2904092 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2904180 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2908536 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2909000 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2909560 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2910326 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2910374 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2914849 00:25:27.317 Removing: /var/run/dpdk/spdk_pid2915423 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2919520 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2922277 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2927494 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2937450 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2937455 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2955707 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2955943 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2961552 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2962056 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2963935 00:25:27.575 Removing: /var/run/dpdk/spdk_pid2966965 00:25:27.575 Clean 00:25:27.575 15:00:01 -- common/autotest_common.sh@1451 -- # return 0 00:25:27.575 15:00:01 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:27.575 15:00:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.575 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.575 15:00:01 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:27.575 15:00:01 -- 
common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.575 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:25:27.575 15:00:01 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:27.575 15:00:01 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:25:27.575 15:00:01 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:25:27.575 15:00:01 -- spdk/autotest.sh@391 -- # hash lcov 00:25:27.575 15:00:01 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:27.575 15:00:01 -- spdk/autotest.sh@393 -- # hostname 00:25:27.575 15:00:01 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:25:27.833 geninfo: WARNING: invalid characters removed from testname! 00:25:49.739 15:00:19 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:49.739 15:00:22 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:50.303 15:00:24 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:52.214 15:00:25 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:53.594 15:00:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:55.497 15:00:29 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:56.871 15:00:30 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:57.128 15:00:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:57.128 15:00:30 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:57.128 15:00:30 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.128 15:00:30 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.128 15:00:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.128 15:00:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.128 15:00:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.128 15:00:30 -- paths/export.sh@5 -- $ export PATH 00:25:57.128 15:00:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.128 15:00:30 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:25:57.128 15:00:30 -- common/autobuild_common.sh@444 -- $ date +%s 00:25:57.128 15:00:30 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721048430.XXXXXX 00:25:57.128 15:00:30 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721048430.YSlT7s 00:25:57.128 15:00:30 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:25:57.128 15:00:30 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:25:57.128 15:00:30 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:25:57.128 15:00:30 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:57.128 15:00:30 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme 
--exclude /tmp --status-bugs' 00:25:57.128 15:00:30 -- common/autobuild_common.sh@460 -- $ get_config_params 00:25:57.128 15:00:30 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:25:57.128 15:00:30 -- common/autotest_common.sh@10 -- $ set +x 00:25:57.128 15:00:30 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:25:57.128 15:00:30 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:25:57.128 15:00:30 -- pm/common@17 -- $ local monitor 00:25:57.128 15:00:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.128 15:00:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.128 15:00:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.128 15:00:30 -- pm/common@21 -- $ date +%s 00:25:57.128 15:00:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:57.128 15:00:30 -- pm/common@21 -- $ date +%s 00:25:57.128 15:00:30 -- pm/common@25 -- $ sleep 1 00:25:57.128 15:00:30 -- pm/common@21 -- $ date +%s 00:25:57.128 15:00:30 -- pm/common@21 -- $ date +%s 00:25:57.128 15:00:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048430 00:25:57.128 15:00:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048430 00:25:57.128 15:00:30 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048430 00:25:57.128 15:00:30 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721048430 00:25:57.128 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048430_collect-vmstat.pm.log 00:25:57.128 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048430_collect-cpu-load.pm.log 00:25:57.128 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048430_collect-cpu-temp.pm.log 00:25:57.128 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721048430_collect-bmc-pm.bmc.pm.log 00:25:58.061 15:00:31 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:25:58.061 15:00:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:25:58.061 15:00:31 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:58.061 15:00:31 -- spdk/autopackage.sh@13 -- $ [[ '' -eq 1 ]] 00:25:58.061 15:00:31 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:58.061 15:00:31 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:58.061 15:00:31 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:58.061 15:00:31 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:58.061 15:00:31 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:58.061 15:00:31 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:58.061 15:00:31 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:58.061 15:00:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:58.061 15:00:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:58.061 15:00:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:58.061 15:00:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:58.061 15:00:31 -- pm/common@44 -- $ pid=2981468 00:25:58.061 15:00:31 -- pm/common@50 -- $ kill -TERM 2981468 00:25:58.061 15:00:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:58.062 15:00:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:58.062 15:00:31 -- pm/common@44 -- $ pid=2981469 00:25:58.062 15:00:31 -- pm/common@50 -- $ kill -TERM 2981469 00:25:58.062 15:00:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:58.062 15:00:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:25:58.062 15:00:31 -- pm/common@44 -- $ pid=2981472 00:25:58.062 15:00:31 -- pm/common@50 -- $ kill -TERM 2981472 00:25:58.062 15:00:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:58.062 15:00:31 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:25:58.062 15:00:31 -- pm/common@44 -- $ pid=2981497 00:25:58.062 15:00:31 -- pm/common@50 -- $ sudo -E kill -TERM 2981497 00:25:58.062 + [[ -n 2566776 ]] 00:25:58.062 + sudo kill 2566776 00:25:58.070 [Pipeline] } 00:25:58.089 [Pipeline] // stage 00:25:58.095 [Pipeline] } 00:25:58.114 [Pipeline] // timeout 00:25:58.119 [Pipeline] } 00:25:58.133 [Pipeline] // catchError 00:25:58.138 [Pipeline] } 00:25:58.158 [Pipeline] // wrap 00:25:58.165 [Pipeline] } 00:25:58.183 [Pipeline] // catchError 00:25:58.191 [Pipeline] stage 00:25:58.193 [Pipeline] { (Epilogue) 00:25:58.207 [Pipeline] catchError 00:25:58.209 [Pipeline] { 00:25:58.223 [Pipeline] echo 00:25:58.224 Cleanup processes 00:25:58.229 [Pipeline] sh 00:25:58.521 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:58.521 2981587 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:25:58.521 2981871 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:58.536 [Pipeline] sh 00:25:58.811 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:58.811 ++ grep -v 'sudo pgrep' 00:25:58.811 ++ awk '{print $1}' 00:25:58.811 + sudo kill -9 2981587 00:25:58.821 [Pipeline] sh 00:25:59.100 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:07.303 [Pipeline] sh 00:26:07.581 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:07.582 Artifacts sizes are good 00:26:07.596 [Pipeline] archiveArtifacts 00:26:07.604 Archiving artifacts 00:26:07.727 [Pipeline] sh 00:26:08.004 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:26:08.021 [Pipeline] cleanWs 00:26:08.033 [WS-CLEANUP] Deleting project workspace... 00:26:08.033 [WS-CLEANUP] Deferred wipeout is used... 
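
For reference, the coverage post-processing traced earlier in this epilogue is a capture, merge, filter pipeline built from lcov. The condensed sketch below uses the same lcov switches that appear in the trace; the output paths are shortened and the --rc branch/function-coverage options from the log are omitted for brevity, so treat it as an outline rather than the exact autotest.sh commands.

# Condensed sketch of the lcov steps traced above.
out=./coverage
# cov_base.info is assumed to have been captured before the tests ran.
lcov -q -c -d ./spdk -t "$(hostname)" --no-external -o "$out/cov_test.info"        # capture test run
lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"   # merge with baseline
# Drop code that should not be counted (DPDK, system headers, sample apps).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done
rm -f "$out/cov_base.info" "$out/cov_test.info"
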
00:26:08.041 [WS-CLEANUP] done 00:26:08.043 [Pipeline] } 00:26:08.064 [Pipeline] // catchError 00:26:08.075 [Pipeline] sh 00:26:08.349 + logger -p user.info -t JENKINS-CI 00:26:08.358 [Pipeline] } 00:26:08.374 [Pipeline] // stage 00:26:08.379 [Pipeline] } 00:26:08.396 [Pipeline] // node 00:26:08.401 [Pipeline] End of Pipeline 00:26:08.432 Finished: SUCCESS